Test Report: Docker_Windows 12425

d52130b292d08b0a6095e884aa0df76b8e13fcee:2021-09-15:20469

Failed tests (16/232)

TestCertOptions (562.68s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20210915033619-22140 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20210915033619-22140 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (7m43.7340641s)
cert_options_test.go:59: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210915033619-22140 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:59: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210915033619-22140 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (6.1057839s)
cert_options_test.go:74: (dbg) Run:  kubectl --context cert-options-20210915033619-22140 config view
cert_options_test.go:79: apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:43:28 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://localhost:58833\n\t  name: cert-options-20210915033619-22140\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:43:20 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://127.0.0.1:58883\n\t  name: kubernetes-upgrade-20210915032703-22140\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    server
: https://127.0.0.1:58908\n\t  name: missing-upgrade-20210915032655-22140\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:43:40 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://127.0.0.1:58839\n\t  name: old-k8s-version-20210915033621-22140\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:21:11 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://127.0.0.1:58454\n\t  name: pause-20210915030944-22140\n\tcontexts:\n\t- context:\n\t    cluster: cert-options-20210915033619-22140\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:43:28 GMT\n\t        provider: minik
ube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-options-20210915033619-22140\n\t  name: cert-options-20210915033619-22140\n\t- context:\n\t    cluster: kubernetes-upgrade-20210915032703-22140\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:43:20 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: default\n\t    user: kubernetes-upgrade-20210915032703-22140\n\t  name: kubernetes-upgrade-20210915032703-22140\n\t- context:\n\t    cluster: missing-upgrade-20210915032655-22140\n\t    user: missing-upgrade-20210915032655-22140\n\t  name: missing-upgrade-20210915032655-22140\n\t- context:\n\t    cluster: old-k8s-version-20210915033621-22140\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:43:40 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: de
fault\n\t    user: old-k8s-version-20210915033621-22140\n\t  name: old-k8s-version-20210915033621-22140\n\t- context:\n\t    cluster: pause-20210915030944-22140\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 03:21:11 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: default\n\t    user: pause-20210915030944-22140\n\t  name: pause-20210915030944-22140\n\tcurrent-context: old-k8s-version-20210915033621-22140\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-options-20210915033619-22140\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\cert-options-20210915033619-22140\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\cert-options-20210915033619-22140\\client.key\n\t- name: kubernetes-upgrade-20210915032703-22140\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\prof
iles\\kubernetes-upgrade-20210915032703-22140\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-20210915032703-22140\\client.key\n\t- name: missing-upgrade-20210915032655-22140\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\missing-upgrade-20210915032655-22140\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\missing-upgrade-20210915032655-22140\\client.key\n\t- name: old-k8s-version-20210915033621-22140\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\old-k8s-version-20210915033621-22140\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\old-k8s-version-20210915033621-22140\\client.key\n\t- name: pause-20210915030944-22140\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.
crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.key\n\n-- /stdout --"
cert_options_test.go:82: *** TestCertOptions FAILED at 2021-09-15 03:44:09.7729557 +0000 GMT m=+8151.598442401
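The root cause is visible in the output above: the cluster was started with --apiserver-port=8555, but the kubeconfig entry for cert-options-20210915033619-22140 reads server: https://localhost:58833. With the Docker driver, container port 8555/tcp is published on an ephemeral host port (58833 for this run, per the docker inspect output below), so the literal string 8555 never appears in the server URL. A minimal sketch of the check that trips, assuming the shape of cert_options_test.go lines 74-79 (function and variable names are illustrative, not the real test code):

    package integration

    import (
    	"os/exec"
    	"strings"
    	"testing"
    )

    // checkAPIServerPort mirrors the assertion logged above: run
    // kubectl config view and require the requested port in the output.
    func checkAPIServerPort(t *testing.T, profile, apiserverPort string) {
    	out, err := exec.Command("kubectl", "--context", profile, "config", "view").CombinedOutput()
    	if err != nil {
    		t.Fatalf("kubectl config view failed: %v", err)
    	}
    	if !strings.Contains(string(out), apiserverPort) {
    		// On the Docker driver the host-published port (58833 here)
    		// shows up instead of 8555 -- exactly the failure recorded above.
    		t.Errorf("apiserver server port incorrect. Output of 'kubectl config view' = %q", string(out))
    	}
    }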
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect cert-options-20210915033619-22140
helpers_test.go:236: (dbg) docker inspect cert-options-20210915033619-22140:

-- stdout --
	[
	    {
	        "Id": "16cb0023c6c8f7aafac0e6d2ae0d44c2d47850c73295e34cfcae20dad980a89f",
	        "Created": "2021-09-15T03:36:39.4192249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 174420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T03:36:43.4220138Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/16cb0023c6c8f7aafac0e6d2ae0d44c2d47850c73295e34cfcae20dad980a89f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16cb0023c6c8f7aafac0e6d2ae0d44c2d47850c73295e34cfcae20dad980a89f/hostname",
	        "HostsPath": "/var/lib/docker/containers/16cb0023c6c8f7aafac0e6d2ae0d44c2d47850c73295e34cfcae20dad980a89f/hosts",
	        "LogPath": "/var/lib/docker/containers/16cb0023c6c8f7aafac0e6d2ae0d44c2d47850c73295e34cfcae20dad980a89f/16cb0023c6c8f7aafac0e6d2ae0d44c2d47850c73295e34cfcae20dad980a89f-json.log",
	        "Name": "/cert-options-20210915033619-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-20210915033619-22140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "cert-options-20210915033619-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d92d4649d4a433116cd85986c2a0a90f64abd137bb3758c87f81abd4a45f810-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a
55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/d
ocker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf210
7f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d92d4649d4a433116cd85986c2a0a90f64abd137bb3758c87f81abd4a45f810/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d92d4649d4a433116cd85986c2a0a90f64abd137bb3758c87f81abd4a45f810/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d92d4649d4a433116cd85986c2a0a90f64abd137bb3758c87f81abd4a45f810/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-options-20210915033619-22140",
	                "Source": "/var/lib/docker/volumes/cert-options-20210915033619-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-20210915033619-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-20210915033619-22140",
	                "name.minikube.sigs.k8s.io": "cert-options-20210915033619-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "faddf7fbefe21a28a0248b2ee72634bc1121120a17441089fffa252b1e5027cf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58834"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58832"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58833"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/faddf7fbefe2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-options-20210915033619-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "16cb0023c6c8",
	                        "cert-options-20210915033619-22140"
	                    ],
	                    "NetworkID": "3813cf20a9594cef7b7cdde6d44e1b11f2701ba643a655276e6377e8398861d5",
	                    "EndpointID": "d005455b98f5d2c105220c19280648e69f992223f3d3643d733b92a2ad5c2470",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
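Note the mapping recorded under NetworkSettings.Ports above: container port 8555/tcp is published on 127.0.0.1:58833, which matches the server: https://localhost:58833 URL in the kubeconfig output earlier. A quick sketch for pulling that binding straight from the daemon, assuming the Docker CLI is on PATH (illustrative, not part of the test suite):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	const container = "cert-options-20210915033619-22140"
    	// Index into NetworkSettings.Ports, the same field shown in the
    	// inspect output above ("8555/tcp" -> HostIp 127.0.0.1, HostPort 58833).
    	format := `{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "inspect", "--format", format, container).Output()
    	if err != nil {
    		log.Fatalf("docker inspect: %v", err)
    	}
    	fmt.Printf("8555/tcp is published on host port %s", out) // 58833 for this run
    }

Equivalently, docker port cert-options-20210915033619-22140 8555 reports the same binding.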
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210915033619-22140 -n cert-options-20210915033619-22140
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210915033619-22140 -n cert-options-20210915033619-22140: (8.5369469s)
helpers_test.go:245: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210915033619-22140 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210915033619-22140 logs -n 25: (41.1543122s)
helpers_test.go:253: TestCertOptions logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |          User           | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| start   | -p pause-20210915030944-22140           | pause-20210915030944-22140              | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:20:10 GMT | Wed, 15 Sep 2021 03:21:42 GMT |
	|         | --alsologtostderr -v=1                  |                                         |                         |         |                               |                               |
	|         | --driver=docker                         |                                         |                         |         |                               |                               |
	| pause   | -p pause-20210915030944-22140           | pause-20210915030944-22140              | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:21:43 GMT | Wed, 15 Sep 2021 03:22:01 GMT |
	|         | --alsologtostderr -v=5                  |                                         |                         |         |                               |                               |
	| unpause | -p pause-20210915030944-22140           | pause-20210915030944-22140              | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:22:46 GMT | Wed, 15 Sep 2021 03:22:59 GMT |
	|         | --alsologtostderr -v=5                  |                                         |                         |         |                               |                               |
	| pause   | -p pause-20210915030944-22140           | pause-20210915030944-22140              | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:22:59 GMT | Wed, 15 Sep 2021 03:23:23 GMT |
	|         | --alsologtostderr -v=5                  |                                         |                         |         |                               |                               |
	| start   | -p                                      | stopped-upgrade-20210915030944-22140    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:22:19 GMT | Wed, 15 Sep 2021 03:26:03 GMT |
	|         | stopped-upgrade-20210915030944-22140    |                                         |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1    |                                         |                         |         |                               |                               |
	|         | --driver=docker                         |                                         |                         |         |                               |                               |
	| start   | -p                                      | running-upgrade-20210915030944-22140    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:22:40 GMT | Wed, 15 Sep 2021 03:26:17 GMT |
	|         | running-upgrade-20210915030944-22140    |                                         |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1    |                                         |                         |         |                               |                               |
	|         | --driver=docker                         |                                         |                         |         |                               |                               |
	| logs    | -p                                      | stopped-upgrade-20210915030944-22140    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:03 GMT | Wed, 15 Sep 2021 03:26:23 GMT |
	|         | stopped-upgrade-20210915030944-22140    |                                         |                         |         |                               |                               |
	| start   | -p                                      | force-systemd-flag-20210915032047-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:20:47 GMT | Wed, 15 Sep 2021 03:26:47 GMT |
	|         | force-systemd-flag-20210915032047-22140 |                                         |                         |         |                               |                               |
	|         | --memory=2048 --force-systemd           |                                         |                         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |                         |         |                               |                               |
	| delete  | -p                                      | running-upgrade-20210915030944-22140    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:17 GMT | Wed, 15 Sep 2021 03:26:50 GMT |
	|         | running-upgrade-20210915030944-22140    |                                         |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210915032047-22140 | force-systemd-flag-20210915032047-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:48 GMT | Wed, 15 Sep 2021 03:26:54 GMT |
	|         | ssh docker info --format                |                                         |                         |         |                               |                               |
	|         | {{.CgroupDriver}}                       |                                         |                         |         |                               |                               |
	| delete  | -p                                      | stopped-upgrade-20210915030944-22140    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:24 GMT | Wed, 15 Sep 2021 03:26:54 GMT |
	|         | stopped-upgrade-20210915030944-22140    |                                         |                         |         |                               |                               |
	| delete  | -p                                      | flannel-20210915032655-22140            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:55 GMT | Wed, 15 Sep 2021 03:27:03 GMT |
	|         | flannel-20210915032655-22140            |                                         |                         |         |                               |                               |
	| delete  | -p                                      | force-systemd-flag-20210915032047-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:54 GMT | Wed, 15 Sep 2021 03:27:27 GMT |
	|         | force-systemd-flag-20210915032047-22140 |                                         |                         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210915032650-22140  | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:26:50 GMT | Wed, 15 Sep 2021 03:35:36 GMT |
	|         | force-systemd-env-20210915032650-22140  |                                         |                         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr -v=5    |                                         |                         |         |                               |                               |
	|         | --driver=docker                         |                                         |                         |         |                               |                               |
	| start   | -p                                      | docker-flags-20210915032727-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:27:27 GMT | Wed, 15 Sep 2021 03:35:42 GMT |
	|         | docker-flags-20210915032727-22140       |                                         |                         |         |                               |                               |
	|         | --cache-images=false                    |                                         |                         |         |                               |                               |
	|         | --memory=2048                           |                                         |                         |         |                               |                               |
	|         | --install-addons=false                  |                                         |                         |         |                               |                               |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |                         |         |                               |                               |
	|         | --docker-env=BAZ=BAT                    |                                         |                         |         |                               |                               |
	|         | --docker-opt=debug                      |                                         |                         |         |                               |                               |
	|         | --docker-opt=icc=true                   |                                         |                         |         |                               |                               |
	|         | --alsologtostderr -v=5                  |                                         |                         |         |                               |                               |
	|         | --driver=docker                         |                                         |                         |         |                               |                               |
	| -p      | force-systemd-env-20210915032650-22140  | force-systemd-env-20210915032650-22140  | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:35:37 GMT | Wed, 15 Sep 2021 03:35:46 GMT |
	|         | ssh docker info --format                |                                         |                         |         |                               |                               |
	|         | {{.CgroupDriver}}                       |                                         |                         |         |                               |                               |
	| -p      | docker-flags-20210915032727-22140       | docker-flags-20210915032727-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:35:42 GMT | Wed, 15 Sep 2021 03:35:49 GMT |
	|         | ssh sudo systemctl show docker          |                                         |                         |         |                               |                               |
	|         | --property=Environment --no-pager       |                                         |                         |         |                               |                               |
	| -p      | docker-flags-20210915032727-22140       | docker-flags-20210915032727-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:35:50 GMT | Wed, 15 Sep 2021 03:35:54 GMT |
	|         | ssh sudo systemctl show docker          |                                         |                         |         |                               |                               |
	|         | --property=ExecStart --no-pager         |                                         |                         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210915032650-22140  | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:35:47 GMT | Wed, 15 Sep 2021 03:36:19 GMT |
	|         | force-systemd-env-20210915032650-22140  |                                         |                         |         |                               |                               |
	| delete  | -p                                      | docker-flags-20210915032727-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:35:55 GMT | Wed, 15 Sep 2021 03:36:21 GMT |
	|         | docker-flags-20210915032727-22140       |                                         |                         |         |                               |                               |
	| start   | -p                                      | kubernetes-upgrade-20210915032703-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:27:04 GMT | Wed, 15 Sep 2021 03:37:04 GMT |
	|         | kubernetes-upgrade-20210915032703-22140 |                                         |                         |         |                               |                               |
	|         | --memory=2200                           |                                         |                         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |                         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |                         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210915032703-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:37:04 GMT | Wed, 15 Sep 2021 03:37:18 GMT |
	|         | kubernetes-upgrade-20210915032703-22140 |                                         |                         |         |                               |                               |
	| start   | -p                                      | kubernetes-upgrade-20210915032703-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:37:20 GMT | Wed, 15 Sep 2021 03:43:53 GMT |
	|         | kubernetes-upgrade-20210915032703-22140 |                                         |                         |         |                               |                               |
	|         | --memory=2200                           |                                         |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2-rc.0       |                                         |                         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |                         |         |                               |                               |
	| start   | -p                                      | cert-options-20210915033619-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:36:20 GMT | Wed, 15 Sep 2021 03:44:03 GMT |
	|         | cert-options-20210915033619-22140       |                                         |                         |         |                               |                               |
	|         | --memory=2048                           |                                         |                         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1               |                                         |                         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15           |                                         |                         |         |                               |                               |
	|         | --apiserver-names=localhost             |                                         |                         |         |                               |                               |
	|         | --apiserver-names=www.google.com        |                                         |                         |         |                               |                               |
	|         | --apiserver-port=8555                   |                                         |                         |         |                               |                               |
	|         | --driver=docker                         |                                         |                         |         |                               |                               |
	|         | --apiserver-name=localhost              |                                         |                         |         |                               |                               |
	| -p      | cert-options-20210915033619-22140       | cert-options-20210915033619-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:44:03 GMT | Wed, 15 Sep 2021 03:44:09 GMT |
	|         | ssh openssl x509 -text -noout -in       |                                         |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |                         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 03:43:54
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 03:43:54.969151    1756 out.go:298] Setting OutFile to fd 1236 ...
	I0915 03:43:54.970184    1756 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 03:43:54.970184    1756 out.go:311] Setting ErrFile to fd 972...
	I0915 03:43:54.970184    1756 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 03:43:54.999383    1756 out.go:305] Setting JSON to false
	I0915 03:43:55.012211    1756 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10280217,"bootTime":1621397217,"procs":159,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 03:43:55.013047    1756 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 03:43:55.021358    1756 out.go:177] * [kubernetes-upgrade-20210915032703-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 03:43:55.021689    1756 notify.go:169] Checking for updates...
	I0915 03:43:55.033155    1756 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 03:43:55.035750    1756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 03:43:52.053482   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:43:54.115114   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:43:56.542703   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:43:55.037883    1756 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 03:43:55.039125    1756 config.go:177] Loaded profile config "kubernetes-upgrade-20210915032703-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 03:43:55.040366    1756 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 03:43:55.763900    1756 docker.go:132] docker version: linux-20.10.5
	I0915 03:43:55.784660    1756 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 03:43:57.120738    1756 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3351921s)
	I0915 03:43:57.121592    1756 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:79 SystemTime:2021-09-15 03:43:56.5156682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 03:43:53.398453   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:53.911054   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:54.397390   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:54.898883   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:55.403816   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:55.904422   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:56.400300   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:56.902021   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:57.400219   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:57.899254   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:57.130668    1756 out.go:177] * Using the docker driver based on existing profile
	I0915 03:43:57.130996    1756 start.go:278] selected driver: docker
	I0915 03:43:57.131218    1756 start.go:751] validating driver "docker" against &{Name:kubernetes-upgrade-20210915032703-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:kubernetes-upgrade-20210915032703-22140 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volume
snapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 03:43:57.131441    1756 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 03:43:57.245402    1756 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 03:43:58.383076    1756 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1370899s)
	I0915 03:43:58.383853    1756 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:79 SystemTime:2021-09-15 03:43:57.8767429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 03:43:58.384801    1756 cni.go:93] Creating CNI manager for ""
	I0915 03:43:58.384801    1756 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 03:43:58.384801    1756 start_flags.go:278] config:
	{Name:kubernetes-upgrade-20210915032703-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:kubernetes-upgrade-20210915032703-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyCompo
nents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 03:43:58.389389    1756 out.go:177] * Starting control plane node kubernetes-upgrade-20210915032703-22140 in cluster kubernetes-upgrade-20210915032703-22140
	I0915 03:43:58.389808    1756 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 03:43:58.392241    1756 out.go:177] * Pulling base image ...
	I0915 03:43:58.392715    1756 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 03:43:58.392715    1756 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 03:43:58.393241    1756 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 03:43:58.393412    1756 cache.go:57] Caching tarball of preloaded images
	I0915 03:43:58.394644    1756 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 03:43:58.394924    1756 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2-rc.0 on docker
	I0915 03:43:58.395341    1756 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kubernetes-upgrade-20210915032703-22140\config.json ...
	I0915 03:43:59.171890    1756 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 03:43:59.171890    1756 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 03:43:59.172210    1756 cache.go:206] Successfully downloaded all kic artifacts
	I0915 03:43:59.172938    1756 start.go:313] acquiring machines lock for kubernetes-upgrade-20210915032703-22140: {Name:mk7af999dfd2d3dc1bf447f052ae1725ebafaa2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 03:43:59.173452    1756 start.go:317] acquired machines lock for "kubernetes-upgrade-20210915032703-22140" in 261.8µs
	I0915 03:43:59.173761    1756 start.go:93] Skipping create...Using existing machine configuration
	I0915 03:43:59.173761    1756 fix.go:55] fixHost starting: 
	I0915 03:43:59.209526    1756 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210915032703-22140 --format={{.State.Status}}
	I0915 03:44:00.041690    1756 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210915032703-22140: state=Running err=<nil>
	W0915 03:44:00.042108    1756 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 03:43:58.656965   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:01.070043   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:02.456932   26504 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (26.4618059s)
	I0915 03:44:02.456932   26504 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (26.2801787s)
	I0915 03:44:02.456932   26504 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (26.9669015s)
	I0915 03:44:02.456932   26504 start.go:729] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
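	The replace command two lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves inside the cluster; its sed expression inserts this hosts block ahead of the Corefile's forward directive:

	hosts {
	   192.168.65.2 host.minikube.internal
	   fallthrough
	}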
	I0915 03:44:02.456932   26504 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (25.4825054s)
	I0915 03:44:02.456932   26504 api_server.go:70] duration metric: took 32.5831642s to wait for apiserver process to appear ...
	I0915 03:44:02.456932   26504 api_server.go:86] waiting for apiserver healthz status ...
	I0915 03:44:02.456932   26504 api_server.go:239] Checking apiserver healthz at https://localhost:58833/healthz ...
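	The healthz wait above boils down to polling a TLS endpoint until it returns 200 "ok". A minimal Go sketch of such a loop (URL, interval, and helper name are illustrative, not minikube's actual api_server.go code; verification is skipped because the apiserver cert is signed by minikubeCA, which the host trust store doesn't know):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitHealthy("https://localhost:58833/healthz", time.Minute))
	}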
	I0915 03:43:58.420345   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:58.898663   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:59.402714   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:43:59.898991   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:00.406499   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:00.901083   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:01.399445   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:01.904689   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:02.399301   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:02.909626   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
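	Process 38996 above is retrying the same pgrep roughly every 500ms until a kube-apiserver process appears. A hedged Go sketch of that retry-until pattern, using plain local exec in place of minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// -x: exact match, -n: newest, -f: match the full command line
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // exit 0 means a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q before timeout", pattern)
	}

	func main() {
		fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute))
	}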
	I0915 03:44:02.464675   26504 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0915 03:44:02.464815   26504 addons.go:406] enableAddons completed in 32.5910465s
	I0915 03:44:02.848990   26504 api_server.go:265] https://localhost:58833/healthz returned 200:
	ok
	I0915 03:44:02.865623   26504 api_server.go:139] control plane version: v1.22.1
	I0915 03:44:02.865623   26504 api_server.go:129] duration metric: took 408.6924ms to wait for apiserver health ...
	I0915 03:44:02.865878   26504 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 03:44:02.962446   26504 system_pods.go:59] 7 kube-system pods found
	I0915 03:44:02.962446   26504 system_pods.go:61] "coredns-78fcd69978-cb8sj" [6c354c34-6a2e-45c2-ab92-5442b51a8b82] Pending
	I0915 03:44:02.962446   26504 system_pods.go:61] "etcd-cert-options-20210915033619-22140" [89fbf87c-fa61-4ec0-925f-fc79faed7c0f] Pending
	I0915 03:44:02.962446   26504 system_pods.go:61] "kube-apiserver-cert-options-20210915033619-22140" [1626db0a-3007-4623-b52d-3d23496cfc78] Pending
	I0915 03:44:02.962446   26504 system_pods.go:61] "kube-controller-manager-cert-options-20210915033619-22140" [1e7ecd28-87dc-4044-9b22-dc9ec8a0030f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 03:44:02.962446   26504 system_pods.go:61] "kube-proxy-wmzsf" [cea7699e-ab59-4817-88b4-afda2970424d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0915 03:44:02.962628   26504 system_pods.go:61] "kube-scheduler-cert-options-20210915033619-22140" [278a8107-bab1-4c53-8aec-710e46a04b4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0915 03:44:02.962628   26504 system_pods.go:61] "storage-provisioner" [42ce7029-cc6b-4bb9-b68b-befa68dc6449] Pending
	I0915 03:44:02.962628   26504 system_pods.go:74] duration metric: took 96.75ms to wait for pod list to return data ...
	I0915 03:44:02.962628   26504 kubeadm.go:547] duration metric: took 33.0888611s to wait for: map[apiserver:true system_pods:true] ...
	I0915 03:44:02.962628   26504 node_conditions.go:102] verifying NodePressure condition ...
	I0915 03:44:03.106433   26504 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 03:44:03.106511   26504 node_conditions.go:123] node cpu capacity is 4
	I0915 03:44:03.106511   26504 node_conditions.go:105] duration metric: took 143.8843ms to run NodePressure ...
	I0915 03:44:03.106511   26504 start.go:231] waiting for startup goroutines ...
	I0915 03:44:03.306110   26504 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 03:44:03.308783   26504 out.go:177] 
	W0915 03:44:03.308783   26504 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 03:44:03.311917   26504 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 03:44:03.314869   26504 out.go:177] * Done! kubectl is now configured to use "cert-options-20210915033619-22140" cluster and "default" namespace by default
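	The warning above is a version-skew check: client and cluster minor versions differ by 2, which exceeds kubectl's supported skew of one minor version. A small Go sketch of that comparison (helper names are illustrative, not minikube's):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.20.0", "1.22.1"
		skew := minor(cluster) - minor(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
		if skew > 1 {
			fmt.Println("! kubectl may have incompatibilities with the cluster")
		}
	}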
	I0915 03:44:00.045099    1756 out.go:177] * Updating the running docker "kubernetes-upgrade-20210915032703-22140" container ...
	I0915 03:44:00.045316    1756 machine.go:88] provisioning docker machine ...
	I0915 03:44:00.045500    1756 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210915032703-22140"
	I0915 03:44:00.054068    1756 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210915032703-22140
	I0915 03:44:00.867839    1756 main.go:130] libmachine: Using SSH client type: native
	I0915 03:44:00.867839    1756 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58879 <nil> <nil>}
	I0915 03:44:00.867839    1756 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210915032703-22140 && echo "kubernetes-upgrade-20210915032703-22140" | sudo tee /etc/hostname
	I0915 03:44:03.862282    1756 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210915032703-22140
	
	I0915 03:44:03.874249    1756 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210915032703-22140
	I0915 03:44:03.599527   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:05.815901   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:03.400451   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:03.899950   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:04.905930   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:05.412458   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:05.904122   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:06.417752   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:06.905514   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:07.400071   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:07.897716   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:04.749155    1756 main.go:130] libmachine: Using SSH client type: native
	I0915 03:44:04.749969    1756 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58879 <nil> <nil>}
	I0915 03:44:04.749969    1756 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210915032703-22140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210915032703-22140/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210915032703-22140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 03:44:07.590989    1756 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 03:44:07.591162    1756 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 03:44:07.591162    1756 ubuntu.go:177] setting up certificates
	I0915 03:44:07.591162    1756 provision.go:83] configureAuth start
	I0915 03:44:07.602203    1756 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210915032703-22140
	I0915 03:44:08.416054    1756 provision.go:138] copyHostCerts
	I0915 03:44:08.416834    1756 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 03:44:08.416834    1756 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 03:44:08.417446    1756 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0915 03:44:08.420621    1756 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 03:44:08.420621    1756 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 03:44:08.420621    1756 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 03:44:08.422626    1756 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 03:44:08.422626    1756 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 03:44:08.422626    1756 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 03:44:08.423620    1756 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-20210915032703-22140 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210915032703-22140]
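	The provision step above generates a server certificate whose SAN list covers the node IP, loopback, and the machine hostnames. A hedged Go sketch of producing a certificate with those SANs (self-signed here for brevity; minikube actually signs with its CA key, and the org/lifetime values are illustrative):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the san=[...] list logged above
			DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-20210915032703-22140"},
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}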
	I0915 03:44:09.014581    1756 provision.go:172] copyRemoteCerts
	I0915 03:44:09.028183    1756 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 03:44:09.044551    1756 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210915032703-22140
	I0915 03:44:08.159118   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:10.252438   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:08.414520   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:08.896684   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:09.411423   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:09.902716   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:10.401759   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:11.396529   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:12.398080   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:12.900263   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:09.961998    1756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58879 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kubernetes-upgrade-20210915032703-22140\id_rsa Username:docker}
	I0915 03:44:11.208395    1756 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.179652s)
	I0915 03:44:11.208655    1756 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 03:44:12.740414    1756 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1289 bytes)
	I0915 03:44:13.733988    1756 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 03:44:14.463868    1756 provision.go:86] duration metric: configureAuth took 6.8727314s
	I0915 03:44:14.463868    1756 ubuntu.go:193] setting minikube options for container-runtime
	I0915 03:44:14.464774    1756 config.go:177] Loaded profile config "kubernetes-upgrade-20210915032703-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 03:44:14.474984    1756 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210915032703-22140
	I0915 03:44:12.690601   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:15.085391   50104 pod_ready.go:102] pod "coredns-fb8b8dccf-kkhkd" in "kube-system" namespace has status "Ready":"False"
	I0915 03:44:13.406216   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:13.900518   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:14.399727   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:14.898891   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:15.410130   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:16.398867   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:16.898823   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:17.907042   38996 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:44:15.304811    1756 main.go:130] libmachine: Using SSH client type: native
	I0915 03:44:15.304811    1756 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58879 <nil> <nil>}
	I0915 03:44:15.304811    1756 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 03:44:17.573161    1756 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 03:44:17.573161    1756 ubuntu.go:71] root file system type: overlay
	I0915 03:44:17.574162    1756 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 03:44:17.583142    1756 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210915032703-22140
	I0915 03:44:18.416434    1756 main.go:130] libmachine: Using SSH client type: native
	I0915 03:44:18.417217    1756 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58879 <nil> <nil>}
	I0915 03:44:18.417217    1756 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 03:36:46 UTC, end at Wed 2021-09-15 03:44:42 UTC. --
	Sep 15 03:41:10 cert-options-20210915033619-22140 dockerd[467]: time="2021-09-15T03:41:10.788002700Z" level=info msg="Processing signal 'terminated'"
	Sep 15 03:41:10 cert-options-20210915033619-22140 dockerd[467]: time="2021-09-15T03:41:10.850020900Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 15 03:41:10 cert-options-20210915033619-22140 dockerd[467]: time="2021-09-15T03:41:10.862041500Z" level=info msg="Daemon shutdown complete"
	Sep 15 03:41:10 cert-options-20210915033619-22140 systemd[1]: docker.service: Succeeded.
	Sep 15 03:41:10 cert-options-20210915033619-22140 systemd[1]: Stopped Docker Application Container Engine.
	Sep 15 03:41:10 cert-options-20210915033619-22140 systemd[1]: Starting Docker Application Container Engine...
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.387850600Z" level=info msg="Starting up"
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.398818800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.399064500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.400034600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.400579000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.415935400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.434653200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.434835600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.434890800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.899922500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 15 03:41:11 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:11.957223300Z" level=info msg="Loading containers: start."
	Sep 15 03:41:13 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:13.331752000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 15 03:41:13 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:13.737835900Z" level=info msg="Loading containers: done."
	Sep 15 03:41:14 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:14.150061100Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
	Sep 15 03:41:14 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:14.150232300Z" level=info msg="Daemon has completed initialization"
	Sep 15 03:41:14 cert-options-20210915033619-22140 systemd[1]: Started Docker Application Container Engine.
	Sep 15 03:41:14 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:14.534127900Z" level=info msg="API listen on [::]:2376"
	Sep 15 03:41:14 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:41:14.595268200Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 15 03:42:57 cert-options-20210915033619-22140 dockerd[780]: time="2021-09-15T03:42:57.181207300Z" level=info msg="ignoring event" container=209e058816f3e49fc64171a45b298977af28682c787026cbd095239584e8cbeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e5996f04b8a75       8d147537fb7d1       6 seconds ago        Running             coredns                   0                   f33854740b76e
	a6913a4c819db       6e38f40d628db       18 seconds ago       Running             storage-provisioner       0                   06287b4414aca
	fcbd6374c205c       36c4ebbc9d979       24 seconds ago       Running             kube-proxy                0                   709551a17e4e4
	62082c6315ab4       6e002eb89a881       About a minute ago   Running             kube-controller-manager   1                   048cf9938fe0e
	70217788e6600       0048118155842       2 minutes ago        Running             etcd                      0                   1fdf230e30351
	f1ec2d6b500ff       aca5ededae9c8       2 minutes ago        Running             kube-scheduler            0                   048e90480b461
	209e058816f3e       6e002eb89a881       2 minutes ago        Exited              kube-controller-manager   0                   048cf9938fe0e
	800b3d462925d       f30469a2491a5       2 minutes ago        Running             kube-apiserver            0                   4e27824996523
	
	* 
	* ==> describe nodes <==
	* Name:               cert-options-20210915033619-22140
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-20210915033619-22140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3
	                    minikube.k8s.io/name=cert-options-20210915033619-22140
	                    minikube.k8s.io/updated_at=2021_09_15T03_43_17_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 03:42:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-20210915033619-22140
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 03:44:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 03:44:00 +0000   Wed, 15 Sep 2021 03:42:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 03:44:00 +0000   Wed, 15 Sep 2021 03:42:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 03:44:00 +0000   Wed, 15 Sep 2021 03:42:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 03:44:00 +0000   Wed, 15 Sep 2021 03:44:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    cert-options-20210915033619-22140
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                5e7d2334-11f4-4d32-a0ea-eef70b1c7dc1
	  Boot ID:                    31a72c78-717c-4979-9c6b-d3a794aac31d
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.1
	  Kube-Proxy Version:         v1.22.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-cb8sj                                     100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     62s
	  kube-system                 etcd-cert-options-20210915033619-22140                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         50s
	  kube-system                 kube-apiserver-cert-options-20210915033619-22140             250m (6%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-cert-options-20210915033619-22140    200m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-wmzsf                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-cert-options-20210915033619-22140             100m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (18%)  0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 79s   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s   kubelet  Node cert-options-20210915033619-22140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet  Node cert-options-20210915033619-22140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet  Node cert-options-20210915033619-22140 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             70s   kubelet  Node cert-options-20210915033619-22140 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  53s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                48s   kubelet  Node cert-options-20210915033619-22140 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002]  ? ktime_get_update_offsets_now+0x36/0x95
	[  +0.000002]  hrtimer_interrupt+0x92/0x165
	[  +0.000003]  hv_stimer0_isr+0x20/0x2d
	[  +0.000007]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000009]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000001]  </IRQ>
	[  +0.000001] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 fd b5 6f 69 ff ff ff 7f c3 e8 77 ce 72 ff f4 c3 e8 70 ce 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 f1 f5 81 ff 65 8b 35 b3 4b 6f 69 31 ff e8
	[  +0.000001] RSP: 0018:ffff98b6000a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000001] RAX: ffffffff9691a410 RBX: 0000000000000001 RCX: ffffffff97253150
	[  +0.000001] RDX: 00000000001bfb3e RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 011cf099150136ab R09: 0000000000000002
	[  +0.000000] R10: ffff8b9f6df73938 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8b9fae19e1c0 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? ldsem_down_write+0x1da/0x1da
	[  +0.000009]  ? native_safe_halt+0x5/0x8
	[  +0.000001]  default_idle+0x1b/0x2c
	[  +0.000001]  do_idle+0xe5/0x216
	[  +0.000002]  cpu_startup_entry+0x6f/0x71
	[  +0.000003]  start_secondary+0x18e/0x1a9
	[  +0.000006]  secondary_startup_64+0xa4/0xb0
	[  +0.000005] ---[ end trace f027fbf82db24e21 ]---
	[Sep15 03:23] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Sep15 03:36] tee (174639): /proc/173352/oom_adj is deprecated, please use /proc/173352/oom_score_adj instead.
	
	* 
	* ==> etcd [70217788e660] <==
	* {"level":"info","ts":"2021-09-15T03:43:44.629Z","caller":"traceutil/trace.go:171","msg":"trace[1872656818] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"148.3519ms","start":"2021-09-15T03:43:44.481Z","end":"2021-09-15T03:43:44.629Z","steps":["trace[1872656818] 'compare'  (duration: 29.9237ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:44.739Z","caller":"traceutil/trace.go:171","msg":"trace[428826018] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"158.2293ms","start":"2021-09-15T03:43:44.580Z","end":"2021-09-15T03:43:44.738Z","steps":["trace[428826018] 'process raft request'  (duration: 67.3882ms)","trace[428826018] 'compare'  (duration: 83.0693ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T03:43:46.384Z","caller":"traceutil/trace.go:171","msg":"trace[817616794] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"112.8631ms","start":"2021-09-15T03:43:46.271Z","end":"2021-09-15T03:43:46.384Z","steps":["trace[817616794] 'process raft request'  (duration: 22.986ms)","trace[817616794] 'compare'  (duration: 83.2916ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T03:43:46.578Z","caller":"traceutil/trace.go:171","msg":"trace[655274222] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"114.4558ms","start":"2021-09-15T03:43:46.464Z","end":"2021-09-15T03:43:46.578Z","steps":["trace[655274222] 'compare'  (duration: 87.4001ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:46.585Z","caller":"traceutil/trace.go:171","msg":"trace[942925476] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"112.1478ms","start":"2021-09-15T03:43:46.473Z","end":"2021-09-15T03:43:46.585Z","steps":["trace[942925476] 'process raft request'  (duration: 86.8865ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T03:43:46.647Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"167.0023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/default/\" range_end:\"/registry/resourcequotas/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T03:43:46.647Z","caller":"traceutil/trace.go:171","msg":"trace[2069424389] range","detail":"{range_begin:/registry/resourcequotas/default/; range_end:/registry/resourcequotas/default0; response_count:0; response_revision:403; }","duration":"167.1423ms","start":"2021-09-15T03:43:46.480Z","end":"2021-09-15T03:43:46.647Z","steps":["trace[2069424389] 'agreement among raft nodes before linearized reading'  (duration: 166.9443ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T03:43:46.648Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"127.7203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3646"}
	{"level":"info","ts":"2021-09-15T03:43:46.648Z","caller":"traceutil/trace.go:171","msg":"trace[1777560954] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:403; }","duration":"127.7664ms","start":"2021-09-15T03:43:46.521Z","end":"2021-09-15T03:43:46.648Z","steps":["trace[1777560954] 'agreement among raft nodes before linearized reading'  (duration: 127.6726ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:46.906Z","caller":"traceutil/trace.go:171","msg":"trace[1329096700] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"180.3744ms","start":"2021-09-15T03:43:46.726Z","end":"2021-09-15T03:43:46.906Z","steps":["trace[1329096700] 'process raft request'  (duration: 170.8763ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:46.918Z","caller":"traceutil/trace.go:171","msg":"trace[1318333876] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"192.1033ms","start":"2021-09-15T03:43:46.725Z","end":"2021-09-15T03:43:46.918Z","steps":["trace[1318333876] 'process raft request'  (duration: 170.7883ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:47.337Z","caller":"traceutil/trace.go:171","msg":"trace[192655157] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"102.2212ms","start":"2021-09-15T03:43:47.235Z","end":"2021-09-15T03:43:47.337Z","steps":["trace[192655157] 'process raft request'  (duration: 50.7742ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:47.346Z","caller":"traceutil/trace.go:171","msg":"trace[150370285] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"110.6456ms","start":"2021-09-15T03:43:47.235Z","end":"2021-09-15T03:43:47.346Z","steps":["trace[150370285] 'process raft request'  (duration: 100.515ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:58.941Z","caller":"traceutil/trace.go:171","msg":"trace[245569927] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"116.1234ms","start":"2021-09-15T03:43:58.824Z","end":"2021-09-15T03:43:58.941Z","steps":["trace[245569927] 'process raft request'  (duration: 40.7001ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:43:58.960Z","caller":"traceutil/trace.go:171","msg":"trace[54918223] linearizableReadLoop","detail":"{readStateIndex:450; appliedIndex:450; }","duration":"100.8414ms","start":"2021-09-15T03:43:58.859Z","end":"2021-09-15T03:43:58.960Z","steps":["trace[54918223] 'read index received'  (duration: 100.8295ms)","trace[54918223] 'applied index is now lower than readState.Index'  (duration: 10.3µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:43:59.039Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"180.4229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:226"}
	{"level":"info","ts":"2021-09-15T03:43:59.039Z","caller":"traceutil/trace.go:171","msg":"trace[1832696591] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:435; }","duration":"180.6791ms","start":"2021-09-15T03:43:58.859Z","end":"2021-09-15T03:43:59.039Z","steps":["trace[1832696591] 'agreement among raft nodes before linearized reading'  (duration: 101.2687ms)","trace[1832696591] 'range keys from in-memory index tree'  (duration: 79.2337ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T03:44:01.118Z","caller":"traceutil/trace.go:171","msg":"trace[1296453845] linearizableReadLoop","detail":"{readStateIndex:459; appliedIndex:459; }","duration":"117.66ms","start":"2021-09-15T03:44:01.000Z","end":"2021-09-15T03:44:01.117Z","steps":["trace[1296453845] 'read index received'  (duration: 117.6488ms)","trace[1296453845] 'applied index is now lower than readState.Index'  (duration: 9.3µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:44:01.136Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"136.4855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-cert-options-20210915033619-22140\" ","response":"range_response_count:1 size:4357"}
	{"level":"info","ts":"2021-09-15T03:44:01.136Z","caller":"traceutil/trace.go:171","msg":"trace[2127601156] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-cert-options-20210915033619-22140; range_end:; response_count:1; response_revision:443; }","duration":"136.6165ms","start":"2021-09-15T03:44:01.000Z","end":"2021-09-15T03:44:01.136Z","steps":["trace[2127601156] 'agreement among raft nodes before linearized reading'  (duration: 118.1878ms)","trace[2127601156] 'range keys from in-memory index tree'  (duration: 18.2211ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T03:44:01.145Z","caller":"traceutil/trace.go:171","msg":"trace[1799665743] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"121.4429ms","start":"2021-09-15T03:44:01.023Z","end":"2021-09-15T03:44:01.145Z","steps":["trace[1799665743] 'process raft request'  (duration: 94.419ms)","trace[1799665743] 'compare'  (duration: 17.9865ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T03:44:18.759Z","caller":"traceutil/trace.go:171","msg":"trace[1036487153] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"135.1351ms","start":"2021-09-15T03:44:18.624Z","end":"2021-09-15T03:44:18.759Z","steps":["trace[1036487153] 'process raft request'  (duration: 111.8141ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:44:20.928Z","caller":"traceutil/trace.go:171","msg":"trace[1945002690] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:490; }","duration":"120.8154ms","start":"2021-09-15T03:44:20.807Z","end":"2021-09-15T03:44:20.928Z","steps":["trace[1945002690] 'read index received'  (duration: 120.8037ms)","trace[1945002690] 'applied index is now lower than readState.Index'  (duration: 9.7µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:44:20.987Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"178.9965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-cert-options-20210915033619-22140\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2021-09-15T03:44:20.987Z","caller":"traceutil/trace.go:171","msg":"trace[586625846] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-cert-options-20210915033619-22140; range_end:; response_count:1; response_revision:467; }","duration":"179.7594ms","start":"2021-09-15T03:44:20.807Z","end":"2021-09-15T03:44:20.987Z","steps":["trace[586625846] 'agreement among raft nodes before linearized reading'  (duration: 121.0952ms)","trace[586625846] 'range keys from in-memory index tree'  (duration: 57.9806ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  03:44:50 up  2:21,  0 users,  load average: 60.62, 35.04, 24.65
	Linux cert-options-20210915033619-22140 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [800b3d462925] <==
	* Trace[1700767156]: ---"Object stored in database" 683ms (03:43:44.946)
	Trace[1700767156]: [822.1098ms] [822.1098ms] END
	I0915 03:43:44.999600       1 trace.go:205] Trace[866074677]: "Update" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-cert-options-20210915033619-22140/status,user-agent:kube-controller-manager/v1.22.1 (linux/amd64) kubernetes/632ed30/system:serviceaccount:kube-system:node-controller,audit-id:e1e5c7c9-1407-4188-8085-79172a6973fb,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 03:43:44.335) (total time: 660ms):
	Trace[866074677]: ---"Object stored in database" 602ms (03:43:44.938)
	Trace[866074677]: [660.6894ms] [660.6894ms] END
	I0915 03:43:45.204085       1 trace.go:205] Trace[632892554]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer,user-agent:kube-controller-manager/v1.22.1 (linux/amd64) kubernetes/632ed30/kube-controller-manager,audit-id:d45996e3-d83c-44a0-bdea-f8dde225f0cf,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 03:43:44.682) (total time: 521ms):
	Trace[632892554]: ---"About to write a response" 521ms (03:43:45.203)
	Trace[632892554]: [521.9813ms] [521.9813ms] END
	I0915 03:43:45.749795       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0915 03:43:46.106808       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0915 03:43:46.852925       1 trace.go:205] Trace[325309241]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.22.1 (linux/amd64) kubernetes/632ed30/system:serviceaccount:kube-system:daemon-set-controller,audit-id:1c4f647f-490d-4891-b85f-075ac6366db2,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 03:43:46.311) (total time: 541ms):
	Trace[325309241]: ---"Object stored in database" 516ms (03:43:46.852)
	Trace[325309241]: [541.7425ms] [541.7425ms] END
	I0915 03:43:46.908967       1 trace.go:205] Trace[2135736814]: "Create" url:/api/v1/namespaces/default/serviceaccounts,user-agent:kube-controller-manager/v1.22.1 (linux/amd64) kubernetes/632ed30/system:serviceaccount:kube-system:service-account-controller,audit-id:17f7d653-edbd-4ae4-9f8b-7eb5125141bb,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 03:43:46.309) (total time: 599ms):
	Trace[2135736814]: ---"Object stored in database" 599ms (03:43:46.908)
	Trace[2135736814]: [599.6866ms] [599.6866ms] END
	I0915 03:43:46.967776       1 trace.go:205] Trace[399309015]: "Create" url:/api/v1/namespaces/default/configmaps,user-agent:kube-controller-manager/v1.22.1 (linux/amd64) kubernetes/632ed30/system:serviceaccount:kube-system:root-ca-cert-publisher,audit-id:c42b8a12-279a-4976-be00-0f3b942f9cff,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 03:43:46.404) (total time: 563ms):
	Trace[399309015]: ---"Object stored in database" 562ms (03:43:46.967)
	Trace[399309015]: [563.0785ms] [563.0785ms] END
	I0915 03:43:59.047325       1 trace.go:205] Trace[558591707]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:f494911e-202a-41f1-8ed6-6fdb546ec780,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:43:58.055) (total time: 991ms):
	Trace[558591707]: ---"Object stored in database" 943ms (03:43:59.020)
	Trace[558591707]: [991.2937ms] [991.2937ms] END
	I0915 03:43:59.067549       1 trace.go:205] Trace[1890453055]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:b5e17540-7842-4096-960f-a4b2843ff538,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:43:58.119) (total time: 908ms):
	Trace[1890453055]: ---"Object stored in database" 897ms (03:43:59.027)
	Trace[1890453055]: [908.0016ms] [908.0016ms] END
	
	* 
	* ==> kube-controller-manager [209e058816f3] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00037e490, 0x5175b80, 0xc000ce0120, 0x4c62201, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00037e490, 0x3b9aca00, 0x0, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00037e490, 0x3b9aca00, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d2
	
	goroutine 180 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00037e530, 0x5175b80, 0xc000ce00f0, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00037e530, 0xdf8475800, 0x0, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00037e530, 0xdf8475800, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	goroutine 155 [runnable]:
	net/http.setRequestCancel.func4(0x0, 0xc000aadc20, 0xc0002e75e0, 0xc000c9860c, 0xc00043b260)
		/usr/local/go/src/net/http/client.go:397 +0x96
	created by net/http.setRequestCancel
		/usr/local/go/src/net/http/client.go:396 +0x337
	
	* 
	* ==> kube-controller-manager [62082c6315ab] <==
	* I0915 03:43:43.959939       1 event.go:291] "Event occurred" object="cert-options-20210915033619-22140" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node cert-options-20210915033619-22140 event: Registered Node cert-options-20210915033619-22140 in Controller"
	I0915 03:43:43.999092       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0915 03:43:44.002191       1 shared_informer.go:247] Caches are synced for namespace 
	I0915 03:43:44.031423       1 shared_informer.go:247] Caches are synced for disruption 
	I0915 03:43:44.031489       1 disruption.go:371] Sending events to api server.
	I0915 03:43:44.059934       1 shared_informer.go:247] Caches are synced for deployment 
	I0915 03:43:44.575453       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0915 03:43:44.604483       1 shared_informer.go:247] Caches are synced for stateful set 
	I0915 03:43:44.610464       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0915 03:43:44.611164       1 shared_informer.go:247] Caches are synced for expand 
	I0915 03:43:44.677434       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0915 03:43:44.692758       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0915 03:43:44.721033       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 03:43:44.721079       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 03:43:44.732398       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 03:43:44.732501       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 03:43:44.927059       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 03:43:45.014736       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-cert-options-20210915033619-22140" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0915 03:43:45.541671       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 03:43:45.562920       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 03:43:45.562947       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 03:43:45.912912       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1"
	I0915 03:43:46.864169       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wmzsf"
	I0915 03:43:46.877071       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-cb8sj"
	I0915 03:44:03.987780       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [fcbd6374c205] <==
	* I0915 03:44:33.599531       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0915 03:44:33.599880       1 server_others.go:140] Detected node IP 192.168.58.2
	W0915 03:44:33.601197       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 03:44:35.281392       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 03:44:35.283106       1 server_others.go:212] Using iptables Proxier.
	I0915 03:44:35.283149       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 03:44:35.283205       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6
	I0915 03:44:35.323273       1 server.go:649] Version: v1.22.1
	I0915 03:44:35.339505       1 config.go:315] Starting service config controller
	I0915 03:44:35.340275       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 03:44:35.361464       1 config.go:224] Starting endpoint slice config controller
	I0915 03:44:35.361543       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0915 03:44:35.477163       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cert-options-20210915033619-22140.16a4e204d2f9c4a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04878e8d56da3d4, ext:6002180301, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-cert-options-20210915033619-22140", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cert-options-20210915033619-22140", UID:"cert-options-20210915033619-22140", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "cert-options-20210915033619-22140.16a4e204d2f9c4a4" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 03:44:35.490660       1 shared_informer.go:247] Caches are synced for service config 
	I0915 03:44:35.563119       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [f1ec2d6b500f] <==
	* E0915 03:42:58.553515       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 03:42:58.583680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 03:42:58.828151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 03:42:58.927033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 03:42:59.194742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 03:42:59.309263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 03:42:59.554069       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 03:42:59.595037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 03:42:59.750736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 03:42:59.939063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 03:43:00.075850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 03:43:00.116749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 03:43:00.186781       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 03:43:00.206629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 03:43:00.260279       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 03:43:03.282048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 03:43:03.305837       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 03:43:03.776002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 03:43:03.809911       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 03:43:03.975960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 03:43:04.520884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 03:43:05.123759       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 03:43:12.158568       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 03:43:12.158635       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0915 03:43:16.979617       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 03:36:46 UTC, end at Wed 2021-09-15 03:44:59 UTC. --
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.171948    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adf02b8889175b1dced2de1cfa857d0a-ca-certs\") pod \"kube-controller-manager-cert-options-20210915033619-22140\" (UID: \"adf02b8889175b1dced2de1cfa857d0a\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172496    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/adf02b8889175b1dced2de1cfa857d0a-flexvolume-dir\") pod \"kube-controller-manager-cert-options-20210915033619-22140\" (UID: \"adf02b8889175b1dced2de1cfa857d0a\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172560    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adf02b8889175b1dced2de1cfa857d0a-k8s-certs\") pod \"kube-controller-manager-cert-options-20210915033619-22140\" (UID: \"adf02b8889175b1dced2de1cfa857d0a\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172602    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6e7d881d17f6d944d2a54576503290f-kubeconfig\") pod \"kube-scheduler-cert-options-20210915033619-22140\" (UID: \"f6e7d881d17f6d944d2a54576503290f\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172640    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8d680e4b34ed251e9091ff208324aae5-etcd-data\") pod \"etcd-cert-options-20210915033619-22140\" (UID: \"8d680e4b34ed251e9091ff208324aae5\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172700    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4eeccf244425bf6c5c6b6aeb52bab502-usr-local-share-ca-certificates\") pod \"kube-apiserver-cert-options-20210915033619-22140\" (UID: \"4eeccf244425bf6c5c6b6aeb52bab502\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172760    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adf02b8889175b1dced2de1cfa857d0a-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-options-20210915033619-22140\" (UID: \"adf02b8889175b1dced2de1cfa857d0a\") "
	Sep 15 03:43:58 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:43:58.172781    2762 reconciler.go:157] "Reconciler: start to sync state"
	Sep 15 03:44:01 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:01.040430    2762 status_manager.go:276] "Container startup changed before pod has synced" pod="kube-system/kube-apiserver-cert-options-20210915033619-22140" containerID="docker://800b3d462925d79f6e719f9dfdf286da8a477ec6b9c9fa9341eb375f8b9fce68"
	Sep 15 03:44:02 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:02.939986    2762 status_manager.go:276] "Container startup changed before pod has synced" pod="kube-system/etcd-cert-options-20210915033619-22140" containerID="docker://70217788e6600e896d2240c58da010916e65b659ed42c715c6c5f4ce67b6647f"
	Sep 15 03:44:04 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:04.714326    2762 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 03:44:04 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:04.758575    2762 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 03:44:04 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:04.809911    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjz2c\" (UniqueName: \"kubernetes.io/projected/42ce7029-cc6b-4bb9-b68b-befa68dc6449-kube-api-access-tjz2c\") pod \"storage-provisioner\" (UID: \"42ce7029-cc6b-4bb9-b68b-befa68dc6449\") "
	Sep 15 03:44:04 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:04.845495    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jxv\" (UniqueName: \"kubernetes.io/projected/6c354c34-6a2e-45c2-ab92-5442b51a8b82-kube-api-access-k8jxv\") pod \"coredns-78fcd69978-cb8sj\" (UID: \"6c354c34-6a2e-45c2-ab92-5442b51a8b82\") "
	Sep 15 03:44:04 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:04.850475    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42ce7029-cc6b-4bb9-b68b-befa68dc6449-tmp\") pod \"storage-provisioner\" (UID: \"42ce7029-cc6b-4bb9-b68b-befa68dc6449\") "
	Sep 15 03:44:04 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:04.850663    2762 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c354c34-6a2e-45c2-ab92-5442b51a8b82-config-volume\") pod \"coredns-78fcd69978-cb8sj\" (UID: \"6c354c34-6a2e-45c2-ab92-5442b51a8b82\") "
	Sep 15 03:44:05 cert-options-20210915033619-22140 kubelet[2762]: W0915 03:44:05.214527    2762 container.go:586] Failed to update stats for container "/kubepods/besteffort/pod42ce7029-cc6b-4bb9-b68b-befa68dc6449": /sys/fs/cgroup/cpuset/kubepods/besteffort/pod42ce7029-cc6b-4bb9-b68b-befa68dc6449/cpuset.mems found to be empty, continuing to push stats
	Sep 15 03:44:15 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:15.185480    2762 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="709551a17e4e44b25190314cb5a266090fe67404accc0dc6eab5522ebd4db870"
	Sep 15 03:44:15 cert-options-20210915033619-22140 kubelet[2762]: E0915 03:44:15.746344    2762 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-options-20210915033619-22140\" already exists" pod="kube-system/kube-apiserver-cert-options-20210915033619-22140"
	Sep 15 03:44:16 cert-options-20210915033619-22140 kubelet[2762]: E0915 03:44:16.316020    2762 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-cert-options-20210915033619-22140\" already exists" pod="kube-system/etcd-cert-options-20210915033619-22140"
	Sep 15 03:44:26 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:26.331924    2762 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="06287b4414aca5470da58f36fe4523dcbcdb23cb5a31cb4e7544957b1d7186c0"
	Sep 15 03:44:37 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:37.982673    2762 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-cb8sj through plugin: invalid network status for"
	Sep 15 03:44:38 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:38.703361    2762 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f33854740b76eafcaf6db52e0c195ffa05d12d5ecc0f80c1d52820dd2ce0b921"
	Sep 15 03:44:40 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:40.904066    2762 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-cb8sj through plugin: invalid network status for"
	Sep 15 03:44:45 cert-options-20210915033619-22140 kubelet[2762]: I0915 03:44:45.879712    2762 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-cb8sj through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [a6913a4c819d] <==
	* I0915 03:44:40.517399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
** stderr ** 
	! Executing "docker container inspect cert-options-20210915033619-22140 --format={{.State.Status}}" took an unusually long time: 2.3237132s
	* Restarting the docker service may improve performance.

** /stderr **
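The stderr warning above points at a slow Docker Desktop daemon: a single "docker container inspect" status probe took 2.3s. To check whether the daemon is still that slow after the run, the same probe can be timed by hand; a minimal PowerShell sketch, reusing the profile name from this run (the container must still exist for this to work):

	Measure-Command {
	  # Same status probe the harness runs; a TotalSeconds value consistently
	  # above ~2s indicates the daemon slowness the warning is about.
	  docker container inspect cert-options-20210915033619-22140 --format '{{.State.Status}}'
	}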
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-options-20210915033619-22140 -n cert-options-20210915033619-22140
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-options-20210915033619-22140 -n cert-options-20210915033619-22140: (6.9896078s)
helpers_test.go:262: (dbg) Run:  kubectl --context cert-options-20210915033619-22140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:262: (dbg) Done: kubectl --context cert-options-20210915033619-22140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.0803355s)
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestCertOptions]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context cert-options-20210915033619-22140 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context cert-options-20210915033619-22140 describe pod : exit status 1 (256.3534ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context cert-options-20210915033619-22140 describe pod : exit status 1
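The describe-pod failure here is an artifact of the post-mortem itself, not of the cluster: the field selector at helpers_test.go:262 returned no non-running pods (the empty "non-running pods:" line above), so kubectl describe pod was invoked with no resource name and exited 1. A sketch of a guard that skips the call when the selector comes back empty; this is a hypothetical illustration in PowerShell, not part of helpers_test.go:

	# Collect non-running pod names; empty string when every pod is Running.
	$pods = kubectl --context cert-options-20210915033619-22140 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running
	# Only describe when there is something to describe.
	if ($pods) { kubectl --context cert-options-20210915033619-22140 describe pod ($pods -split ' ') }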
helpers_test.go:176: Cleaning up "cert-options-20210915033619-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20210915033619-22140

=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20210915033619-22140: (33.2657684s)
--- FAIL: TestCertOptions (562.68s)

TestFunctional/parallel/LoadImageFromFile (47.55s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:281: (dbg) Run:  docker pull busybox:1.31
functional_test.go:281: (dbg) Done: docker pull busybox:1.31: (3.6168905s)
functional_test.go:288: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210915015618-22140

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:295: (dbg) Run:  docker save -o busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915015618-22140

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:295: (dbg) Done: docker save -o busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915015618-22140: (1.1094347s)
functional_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load C:\jenkins\workspace\Docker_Windows_integration\busybox-load.tar

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load C:\jenkins\workspace\Docker_Windows_integration\busybox-load.tar: (8.4956011s)
functional_test.go:308: loading image into minikube: <nil>

** stderr ** 
	! Executing "docker container inspect functional-20210915015618-22140 --format={{.State.Status}}" took an unusually long time: 2.2407818s
	* Restarting the docker service may improve performance.

** /stderr **
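The failure reported at functional_test.go:308 ("loading image into minikube: <nil>") fired even though the commands themselves completed; the stderr above carries only the slow docker inspect warning. For replaying the flow outside the harness, the same sequence from the log can be run by hand, a sketch with the tarball written to the current directory rather than the Jenkins workspace path:

	docker pull busybox:1.31
	docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210915015618-22140
	docker save -o busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915015618-22140
	out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load busybox-load.tar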
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/LoadImageFromFile]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20210915015618-22140

=== CONT  TestFunctional/parallel/LoadImageFromFile
helpers_test.go:236: (dbg) docker inspect functional-20210915015618-22140:

-- stdout --
	[
	    {
	        "Id": "3327eb4f9ec0793e2e65268021110864a5b85b29afd8583a186e889d71d71b05",
	        "Created": "2021-09-15T01:56:31.7050932Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27259,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T01:56:33.4814723Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/3327eb4f9ec0793e2e65268021110864a5b85b29afd8583a186e889d71d71b05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3327eb4f9ec0793e2e65268021110864a5b85b29afd8583a186e889d71d71b05/hostname",
	        "HostsPath": "/var/lib/docker/containers/3327eb4f9ec0793e2e65268021110864a5b85b29afd8583a186e889d71d71b05/hosts",
	        "LogPath": "/var/lib/docker/containers/3327eb4f9ec0793e2e65268021110864a5b85b29afd8583a186e889d71d71b05/3327eb4f9ec0793e2e65268021110864a5b85b29afd8583a186e889d71d71b05-json.log",
	        "Name": "/functional-20210915015618-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-20210915015618-22140:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210915015618-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/832ca370b586d0f789f990478492352b903c0ef1c8df2738e208181b0594ec79-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a
55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/d
ocker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf210
7f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/832ca370b586d0f789f990478492352b903c0ef1c8df2738e208181b0594ec79/merged",
	                "UpperDir": "/var/lib/docker/overlay2/832ca370b586d0f789f990478492352b903c0ef1c8df2738e208181b0594ec79/diff",
	                "WorkDir": "/var/lib/docker/overlay2/832ca370b586d0f789f990478492352b903c0ef1c8df2738e208181b0594ec79/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-20210915015618-22140",
	                "Source": "/var/lib/docker/volumes/functional-20210915015618-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210915015618-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210915015618-22140",
	                "name.minikube.sigs.k8s.io": "functional-20210915015618-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed1763b50256269b82626593465d25de1d7af51c624f3ad4ed7ef46b112c23da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57164"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57165"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57168"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ed1763b50256",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210915015618-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3327eb4f9ec0",
	                        "functional-20210915015618-22140"
	                    ],
	                    "NetworkID": "d98dd2d5d44882b95f50124b7a70977d2ea24a84ca00be13be6e63eae30114c2",
	                    "EndpointID": "ca98e7360e179b4d6026562bd764f6af00bf09a9132cb4cf85095bc2cf475248",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
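Nearly all of the inspect dump above is static container configuration; the pieces usually of interest when debugging are State.Status and the forwarded ports (22 for SSH, 8441 for the apiserver on this profile). Docker's Go-template formatting can narrow the output to just those fields; a small sketch against the same container:

	# Status only: the probe the harness itself runs.
	docker inspect functional-20210915015618-22140 --format '{{.State.Status}}'
	# Host-port bindings only; for this run these are the 127.0.0.1:57164-57168 entries.
	docker inspect functional-20210915015618-22140 --format '{{json .NetworkSettings.Ports}}'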
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210915015618-22140 -n functional-20210915015618-22140
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210915015618-22140 -n functional-20210915015618-22140: (6.571272s)
helpers_test.go:245: <<< TestFunctional/parallel/LoadImageFromFile FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/LoadImageFromFile]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 logs -n 25: (19.934986s)
helpers_test.go:253: TestFunctional/parallel/LoadImageFromFile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                                  Args                                  |             Profile             |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:22 GMT | Wed, 15 Sep 2021 02:03:22 GMT |
	|         | config unset cpus                                                      |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:20 GMT | Wed, 15 Sep 2021 02:03:24 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/22140.pem                                               |                                 |                         |         |                               |                               |
	| profile | list --output json                                                     | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:23 GMT | Wed, 15 Sep 2021 02:03:27 GMT |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:24 GMT | Wed, 15 Sep 2021 02:03:28 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /usr/share/ca-certificates/22140.pem                                   |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:23 GMT | Wed, 15 Sep 2021 02:03:28 GMT |
	|         | cp testdata\cp-test.txt                                                |                                 |                         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                               |                                 |                         |         |                               |                               |
	| profile | list                                                                   | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:28 GMT | Wed, 15 Sep 2021 02:03:33 GMT |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:29 GMT | Wed, 15 Sep 2021 02:03:33 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                               |                                 |                         |         |                               |                               |
	| profile | list -l                                                                | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:34 GMT | Wed, 15 Sep 2021 02:03:34 GMT |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:29 GMT | Wed, 15 Sep 2021 02:03:35 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                                              |                                 |                         |         |                               |                               |
	| profile | list -o json                                                           | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:34 GMT | Wed, 15 Sep 2021 02:03:39 GMT |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:35 GMT | Wed, 15 Sep 2021 02:03:39 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/221402.pem                                              |                                 |                         |         |                               |                               |
	| profile | list -o json --light                                                   | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:39 GMT | Wed, 15 Sep 2021 02:03:39 GMT |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:40 GMT | Wed, 15 Sep 2021 02:03:43 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /usr/share/ca-certificates/221402.pem                                  |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:44 GMT | Wed, 15 Sep 2021 02:03:48 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                                              |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140 image load                             | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:48 GMT | Wed, 15 Sep 2021 02:03:54 GMT |
	|         | docker.io/library/busybox:remove-functional-20210915015618-22140       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140 image rm                               | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:55 GMT | Wed, 15 Sep 2021 02:03:57 GMT |
	|         | docker.io/library/busybox:remove-functional-20210915015618-22140       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:54 GMT | Wed, 15 Sep 2021 02:03:58 GMT |
	|         | image ls                                                               |                                 |                         |         |                               |                               |
	| ssh     | -p                                                                     | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:57 GMT | Wed, 15 Sep 2021 02:04:02 GMT |
	|         | functional-20210915015618-22140                                        |                                 |                         |         |                               |                               |
	|         | -- docker images                                                       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140 image build -t                         | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:03:58 GMT | Wed, 15 Sep 2021 02:04:08 GMT |
	|         | localhost/my-image:functional-20210915015618-22140                     |                                 |                         |         |                               |                               |
	|         | testdata\build                                                         |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:04:03 GMT | Wed, 15 Sep 2021 02:04:08 GMT |
	|         | image pull                                                             |                                 |                         |         |                               |                               |
	|         | docker.io/library/busybox:1.30                                         |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140                                        | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:04:04 GMT | Wed, 15 Sep 2021 02:04:10 GMT |
	|         | service list                                                           |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140 image                                  | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:04:09 GMT | Wed, 15 Sep 2021 02:04:13 GMT |
	|         | tag docker.io/library/busybox:1.30                                     |                                 |                         |         |                               |                               |
	|         | docker.io/library/busybox:save-to-file-functional-20210915015618-22140 |                                 |                         |         |                               |                               |
	| ssh     | -p functional-20210915015618-22140                                     | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:04:09 GMT | Wed, 15 Sep 2021 02:04:14 GMT |
	|         | -- docker image inspect                                                |                                 |                         |         |                               |                               |
	|         | localhost/my-image:functional-20210915015618-22140                     |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140 image save                             | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:04:14 GMT | Wed, 15 Sep 2021 02:04:20 GMT |
	|         | docker.io/library/busybox:save-to-file-functional-20210915015618-22140 |                                 |                         |         |                               |                               |
	|         | C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915015618-22140 image load                             | functional-20210915015618-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:04:16 GMT | Wed, 15 Sep 2021 02:04:24 GMT |
	|         | C:\jenkins\workspace\Docker_Windows_integration\busybox-load.tar       |                                 |                         |         |                               |                               |
	|---------|------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 02:03:39
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 02:03:39.866358   42656 out.go:298] Setting OutFile to fd 1532 ...
	I0915 02:03:39.867912   42656 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:03:39.867912   42656 out.go:311] Setting ErrFile to fd 1536...
	I0915 02:03:39.868357   42656 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:03:39.885382   42656 out.go:305] Setting JSON to false
	I0915 02:03:39.892361   42656 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10274202,"bootTime":1621397217,"procs":152,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 02:03:39.892361   42656 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 02:03:39.898400   42656 out.go:177] * [functional-20210915015618-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 02:03:39.902539   42656 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 02:03:39.905736   42656 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 02:03:39.909041   42656 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 02:03:39.934527   42656 config.go:177] Loaded profile config "functional-20210915015618-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 02:03:39.935651   42656 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 02:03:40.578796   42656 docker.go:132] docker version: linux-20.10.5
	I0915 02:03:40.588769   42656 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 02:03:41.944748   42656 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3559834s)
	I0915 02:03:41.951550   42656 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-09-15 02:03:41.346114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 02:03:41.956673   42656 out.go:177] * Using the docker driver based on existing profile
	I0915 02:03:41.956941   42656 start.go:278] selected driver: docker
	I0915 02:03:41.957315   42656 start.go:751] validating driver "docker" against &{Name:functional-20210915015618-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915015618-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 02:03:41.957315   42656 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 02:03:42.006778   42656 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 02:03:43.326497   42656 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3195279s)
	I0915 02:03:43.328075   42656 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2021-09-15 02:03:42.7441726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 02:03:43.424256   42656 cni.go:93] Creating CNI manager for ""
	I0915 02:03:43.424256   42656 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 02:03:43.424256   42656 start_flags.go:278] config:
	{Name:functional-20210915015618-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915015618-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
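	
	[editor's note] The config dump above shows the profile under test: an apiserver ExtraOption (enable-admission-plugins=NamespaceAutoProvision) and a non-default API server port (8441). A minimal sketch of an equivalent start invocation, reconstructed from the dump — the test's exact command line is not shown in this log:
	
	    out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --memory=4000 --apiserver-port=8441 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --driver=docker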
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 01:56:35 UTC, end at Wed 2021-09-15 02:04:40 UTC. --
	Sep 15 02:01:36 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:36.695809800Z" level=info msg="ignoring event" container=49beb5d9968abf3b118db461d710487677da08b8528fac3f50725fc2db5b9581 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:36 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:36.709942100Z" level=info msg="ignoring event" container=d253574a74d33ab68bd0e42897722b29c258a35b14a745809c2f4b148f4d75a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:36 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:36.723915600Z" level=info msg="ignoring event" container=5ac1d53de6ecb6371d5040666c40ceb743c3b7dd8e97552af153140f0c8071b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:36 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:36.947334000Z" level=info msg="ignoring event" container=4e8b5426cc58c62e244dd8071abb776206d37785206a4fb40b39f48cd9291e96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:37 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:37.024900600Z" level=info msg="ignoring event" container=d0bb6744c76666391c3f6b62195437255408e4ec481bc0aec96813a274342cb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:37 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:37.122405800Z" level=info msg="ignoring event" container=8f7ca71e289b0e432929182cc5472a4d191accbea761f953de80e48d3337b9ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:37 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:37.209160500Z" level=info msg="ignoring event" container=394ea39c4e8cb2d4f8e213d14d501995af7c83c8e1de6cbe288d38f1de97721b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:37 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:37.210924700Z" level=info msg="ignoring event" container=471b58fce2c114e5ee2a27a555513d9b3c95ad9db4db9b33feaa04c18ef1695f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:37 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:37.286407900Z" level=info msg="ignoring event" container=5b333d3f35d1206e8447ca75f97b475dbffe28e4b39d100151b6411597473f54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:38 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:38.028417100Z" level=info msg="ignoring event" container=541a642f582caf560f966f03bd03af2280753f28711e660e6684af704e83ea3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:38 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:38.887381500Z" level=info msg="ignoring event" container=dfa20a022dbf0504c218b35859213a3eab19d99b7db6ef7883c3d0936df079f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:41 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:41.586868800Z" level=info msg="ignoring event" container=2d2b87db61f2f73ea90a9f4a68365e665e52c8f94f1e3b858742111016632a27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:01:42 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:42.686607500Z" level=error msg="Handler for GET /v1.41/containers/916ec8136bf53bab76c121914051df1e970bc638c95e2757dc4f4d660ee607a5/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 02:01:42 functional-20210915015618-22140 dockerd[787]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	Sep 15 02:01:43 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:43.037686600Z" level=error msg="Handler for GET /v1.41/containers/33b8e6081cb2fb049a9b7b0b3ba3875c51d917b77b3483ff138bc43d8d48e2e2/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 02:01:43 functional-20210915015618-22140 dockerd[787]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	Sep 15 02:01:44 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:01:44.337291500Z" level=info msg="ignoring event" container=2345d4c4037d18d58739be97c2e2bd077dd9e115b8784eee8e3d34b93c91cf13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:02:04 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:02:04.097890000Z" level=info msg="ignoring event" container=833db07f46f3084c53c1717ff11553c782ca75fb0c4da3ce91d0a2d70e907d10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:02:06 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:02:06.300205100Z" level=info msg="ignoring event" container=b21667c365a6b1b4dd69736ec84f536b69cecb0290367593777b219c1171c203 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:02:08 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:02:08.789083100Z" level=info msg="ignoring event" container=f511b0b78fb5c7fa38d7bda50862be8f8512692e527d667b8192a9ae39d79fd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:02:09 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:02:09.002585500Z" level=info msg="ignoring event" container=d86a240ab762fd651838fe9734a44b5993a2de4bf0bcf6fc4ff412488c2dc690 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:02:09 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:02:09.527869300Z" level=info msg="ignoring event" container=33b8e6081cb2fb049a9b7b0b3ba3875c51d917b77b3483ff138bc43d8d48e2e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:02:22 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:02:22.975337800Z" level=info msg="ignoring event" container=a55829c50e2bc3cb9eadaa1e220344d885cd36155f8444bfd141e41585f427df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:04:06 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:04:06.055920500Z" level=info msg="ignoring event" container=1204aa4de794482378dd0a06e58b8ec2004b1fe8ebecb319c8b2af300262a034 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 02:04:06 functional-20210915015618-22140 dockerd[787]: time="2021-09-15T02:04:06.827964900Z" level=info msg="Layer sha256:12611729abe769f611aa754a4734cecf22ff5cd0ffd1bb14d56dd4dbef61d809 cleaned up"
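	
	[editor's note] The dockerd entries above are read from the node's systemd journal. Assuming the standard kicbase node image, they can be re-collected with:
	
	    minikube -p functional-20210915015618-22140 ssh -- sudo journalctl -u docker --no-pager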
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID
	0ae898a10c363       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   46 seconds ago       Running             echoserver                0                   2f53ed503a062
	60e8259cb8e8d       8d147537fb7d1                                                                                   About a minute ago   Running             coredns                   1                   71135db54b292
	b96e237d2a2ae       6e002eb89a881                                                                                   About a minute ago   Running             kube-controller-manager   3                   0eab591d2c2d0
	30f50896bfb06       6e38f40d628db                                                                                   About a minute ago   Running             storage-provisioner       2                   ab7b1b3842683
	e27702200934c       f30469a2491a5                                                                                   2 minutes ago        Running             kube-apiserver            2                   b74ba3663a9ed
	a55829c50e2bc       6e002eb89a881                                                                                   2 minutes ago        Exited              kube-controller-manager   2                   0eab591d2c2d0
	b21667c365a6b       f30469a2491a5                                                                                   2 minutes ago        Exited              kube-apiserver            1                   b74ba3663a9ed
	6ed51bf0bd324       aca5ededae9c8                                                                                   3 minutes ago        Running             kube-scheduler            1                   636d41e6d0f4a
	0ebb9f4b9e5a4       0048118155842                                                                                   3 minutes ago        Running             etcd                      1                   b4428cac0e90a
	2345d4c4037d1       6e38f40d628db                                                                                   3 minutes ago        Exited              storage-provisioner       1                   ab7b1b3842683
	916ec8136bf53       36c4ebbc9d979                                                                                   3 minutes ago        Running             kube-proxy                1                   0a70e139ec88c
	2d2b87db61f2f       8d147537fb7d1                                                                                   4 minutes ago        Exited              coredns                   0                   471b58fce2c11
	5b333d3f35d12       36c4ebbc9d979                                                                                   5 minutes ago        Exited              kube-proxy                0                   394ea39c4e8cb
	541a642f582ca       aca5ededae9c8                                                                                   5 minutes ago        Exited              kube-scheduler            0                   d0bb6744c7666
	8f7ca71e289b0       0048118155842                                                                                   5 minutes ago        Exited              etcd                      0                   e53595456d657
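	
	[editor's note] The container status table is CRI output; assuming crictl is present in the kicbase image, it can be reproduced on the node with:
	
	    minikube -p functional-20210915015618-22140 ssh -- sudo crictl ps -a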
	
	* 
	* ==> coredns [2d2b87db61f2] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
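	
	[editor's note] The two differing "Running configuration MD5" lines show the Corefile was reloaded mid-run. The live Corefile can be inspected with, e.g.:
	
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'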
	
	* 
	* ==> coredns [60e8259cb8e8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20210915015618-22140
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20210915015618-22140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3
	                    minikube.k8s.io/name=functional-20210915015618-22140
	                    minikube.k8s.io/updated_at=2021_09_15T01_59_24_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 01:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20210915015618-22140
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 02:04:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 02:04:34 +0000   Wed, 15 Sep 2021 01:59:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 02:04:34 +0000   Wed, 15 Sep 2021 01:59:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 02:04:34 +0000   Wed, 15 Sep 2021 01:59:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 02:04:34 +0000   Wed, 15 Sep 2021 01:59:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20210915015618-22140
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                d24ce515-44c1-4598-ba52-8665f22e310b
	  Boot ID:                    31a72c78-717c-4979-9c6b-d3a794aac31d
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.1
	  Kube-Proxy Version:         v1.22.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6cbfcd7cbc-qcw6b                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  default                     mysql-9bbbc5bbb-fk78l                                       600m (15%)    700m (17%)  512Mi (2%)       700Mi (3%)     28s
	  default                     nginx-svc                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 coredns-78fcd69978-mjv2b                                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m8s
	  kube-system                 etcd-functional-20210915015618-22140                        100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         5m22s
	  kube-system                 kube-apiserver-functional-20210915015618-22140              250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-functional-20210915015618-22140    200m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-d2l7d                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-functional-20210915015618-22140              100m (2%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (33%)  700m (17%)
	  memory             682Mi (3%)   870Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From     Message
	  ----    ------                   ----                   ----     -------
	  Normal  NodeHasSufficientPID     5m44s (x7 over 5m45s)  kubelet  Node functional-20210915015618-22140 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m45s)  kubelet  Node functional-20210915015618-22140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m45s)  kubelet  Node functional-20210915015618-22140 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m17s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s                  kubelet  Node functional-20210915015618-22140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s                  kubelet  Node functional-20210915015618-22140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s                  kubelet  Node functional-20210915015618-22140 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             5m16s                  kubelet  Node functional-20210915015618-22140 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m7s                   kubelet  Node functional-20210915015618-22140 status is now: NodeReady
	  Normal  Starting                 2m46s                  kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m45s)  kubelet  Node functional-20210915015618-22140 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m42s (x8 over 2m45s)  kubelet  Node functional-20210915015618-22140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s (x8 over 2m45s)  kubelet  Node functional-20210915015618-22140 status is now: NodeHasNoDiskPressure
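	
	[editor's note] The node report above is standard kubectl describe output; the duplicated Starting/NodeHasSufficient* events (5m17s and 2m46s ago) reflect the kubelet restart during the functional test. To regenerate:
	
	    kubectl --context functional-20210915015618-22140 describe node functional-20210915015618-22140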
	
	* 
	* ==> dmesg <==
	* [  +0.000003]  ? hrtimer_init+0xde/0xde
	[  +0.000001]  hrtimer_wakeup+0x1e/0x21
	[  +0.000005]  __hrtimer_run_queues+0x117/0x1c4
	[  +0.000002]  ? ktime_get_update_offsets_now+0x36/0x95
	[  +0.000002]  hrtimer_interrupt+0x92/0x165
	[  +0.000003]  hv_stimer0_isr+0x20/0x2d
	[  +0.000007]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000009]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000001]  </IRQ>
	[  +0.000001] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 fd b5 6f 69 ff ff ff 7f c3 e8 77 ce 72 ff f4 c3 e8 70 ce 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 f1 f5 81 ff 65 8b 35 b3 4b 6f 69 31 ff e8
	[  +0.000001] RSP: 0018:ffff98b6000a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000001] RAX: ffffffff9691a410 RBX: 0000000000000001 RCX: ffffffff97253150
	[  +0.000001] RDX: 00000000001bfb3e RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 011cf099150136ab R09: 0000000000000002
	[  +0.000000] R10: ffff8b9f6df73938 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8b9fae19e1c0 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? ldsem_down_write+0x1da/0x1da
	[  +0.000009]  ? native_safe_halt+0x5/0x8
	[  +0.000001]  default_idle+0x1b/0x2c
	[  +0.000001]  do_idle+0xe5/0x216
	[  +0.000002]  cpu_startup_entry+0x6f/0x71
	[  +0.000003]  start_secondary+0x18e/0x1a9
	[  +0.000006]  secondary_startup_64+0xa4/0xb0
	[  +0.000005] ---[ end trace f027fbf82db24e21 ]---
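	
	[editor's note] The dmesg excerpt is the tail of a kernel trace from the linuxkit host VM; only the backtrace survived in this capture. Assuming util-linux dmesg is available in the node image, a fresh copy can be pulled with:
	
	    minikube -p functional-20210915015618-22140 ssh -- sudo dmesg --ctime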
	
	* 
	* ==> etcd [0ebb9f4b9e5a] <==
	* {"level":"info","ts":"2021-09-15T02:01:44.108Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T02:01:44.110Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-09-15T02:01:44.111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-09-15T02:01:44.111Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-09-15T02:01:44.112Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-09-15T02:01:44.131Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T02:01:44.131Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T02:01:44.131Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-09-15T02:01:44.133Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-09-15T02:01:44.135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-09-15T02:01:44.185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-09-15T02:01:44.185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-09-15T02:01:44.185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-09-15T02:01:44.185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-09-15T02:01:44.185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-09-15T02:01:44.185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-09-15T02:01:44.135Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-09-15T02:01:44.186Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915015618-22140 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-09-15T02:01:44.186Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T02:01:44.194Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-09-15T02:01:44.194Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-09-15T02:01:44.194Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T02:01:44.195Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-09-15T02:01:44.198Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-09-15T02:02:37.424Z","caller":"traceutil/trace.go:171","msg":"trace[1272439478] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"103.9481ms","start":"2021-09-15T02:02:37.320Z","end":"2021-09-15T02:02:37.424Z","steps":["trace[1272439478] 'compare'  (duration: 84.6003ms)"],"step_count":1}
	
	* 
	* ==> etcd [8f7ca71e289b] <==
	* {"level":"info","ts":"2021-09-15T01:59:04.814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-09-15T01:59:04.814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-09-15T01:59:04.816Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T01:59:04.835Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T01:59:04.836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T01:59:04.836Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T01:59:04.836Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915015618-22140 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-09-15T01:59:04.836Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T01:59:04.889Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T01:59:04.898Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-09-15T01:59:04.902Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-09-15T01:59:04.902Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-09-15T01:59:05.002Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2021-09-15T01:59:17.930Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.3786ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:114"}
	{"level":"info","ts":"2021-09-15T01:59:17.930Z","caller":"traceutil/trace.go:171","msg":"trace[1884314352] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:30; }","duration":"101.5897ms","start":"2021-09-15T01:59:17.828Z","end":"2021-09-15T01:59:17.930Z","steps":["trace[1884314352] 'agreement among raft nodes before linearized reading'  (duration: 101.353ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T01:59:34.816Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.7588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:1 size:245"}
	{"level":"info","ts":"2021-09-15T01:59:34.816Z","caller":"traceutil/trace.go:171","msg":"trace[1540254425] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:375; }","duration":"103.9201ms","start":"2021-09-15T01:59:34.712Z","end":"2021-09-15T01:59:34.816Z","steps":["trace[1540254425] 'agreement among raft nodes before linearized reading'  (duration: 89.8639ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T02:01:35.916Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-09-15T02:01:35.917Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20210915015618-22140","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/09/15 02:01:35 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-09-15T02:01:36.087Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	WARNING: 2021/09/15 02:01:36 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-09-15T02:01:36.095Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T02:01:36.096Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T02:01:36.096Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20210915015618-22140","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  02:04:45 up 41 min,  0 users,  load average: 6.18, 4.73, 6.14
	Linux functional-20210915015618-22140 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [b21667c365a6] <==
	* I0915 02:02:05.839849       1 server.go:553] external host was not specified, using 192.168.49.2
	I0915 02:02:05.842511       1 server.go:161] Version: v1.22.1
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
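	
	[editor's note] "address already in use" most likely means the previous kube-apiserver instance still held port 8441 when this attempt started; the attempt-2 container succeeded once the port was released. One way to see which process holds the port (assuming ss is present in the node image):
	
	    minikube -p functional-20210915015618-22140 ssh -- sudo ss -ltnp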
	
	* 
	* ==> kube-apiserver [e27702200934] <==
	* I0915 02:02:36.505268       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0915 02:02:36.587320       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 02:02:36.590987       1 cache.go:39] Caches are synced for autoregister controller
	I0915 02:02:36.687454       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0915 02:02:36.802271       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0915 02:02:36.816248       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 02:02:36.821425       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0915 02:02:37.006460       1 trace.go:205] Trace[262615254]: "Patch" url:/api/v1/namespaces/default/events/functional-20210915015618-22140.16a4dc6b3535dc9c,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:1c19f784-4af4-46a8-9e08-3b2c49e52ded,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 02:02:36.505) (total time: 500ms):
	Trace[262615254]: ---"Object stored in database" 482ms (02:02:37.006)
	Trace[262615254]: [500.5165ms] [500.5165ms] END
	I0915 02:02:37.096661       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0915 02:02:37.116886       1 trace.go:205] Trace[1462565212]: "Patch" url:/api/v1/namespaces/kube-system/pods/etcd-functional-20210915015618-22140/status,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:5731eefc-45be-47a2-80aa-a1b01762b346,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 02:02:36.610) (total time: 506ms):
	Trace[1462565212]: ---"About to check admission control" 484ms (02:02:37.095)
	Trace[1462565212]: [506.4909ms] [506.4909ms] END
	I0915 02:02:37.385877       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0915 02:02:37.385955       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0915 02:02:37.497203       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0915 02:02:40.017664       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0915 02:02:40.139731       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0915 02:02:40.417219       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0915 02:02:40.510203       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 02:02:40.541163       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 02:03:01.815070       1 controller.go:611] quota admission added evaluator for: endpoints
	I0915 02:03:01.829081       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 02:03:20.594966       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [a55829c50e2b] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00045ba00, 0x5175b80, 0xc00090b0e0, 0x4c62201, 0xc000114360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00045ba00, 0x3b9aca00, 0x0, 0x1, 0xc000114360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00045ba00, 0x3b9aca00, 0xc000114360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d2
	
	goroutine 152 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00045ba10, 0x5175b80, 0xc00090b050, 0x1, 0xc000114360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00045ba10, 0xdf8475800, 0x0, 0x1, 0xc000114360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00045ba10, 0xdf8475800, 0xc000114360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	goroutine 110 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0000b05a0, 0xdf8475800, 0x0, 0x51d55b0, 0xc000134880)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:705 +0x156
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:688 +0x96
	
	* 
	* ==> kube-controller-manager [b96e237d2a2a] <==
	* I0915 02:03:01.606068       1 shared_informer.go:247] Caches are synced for endpoint 
	I0915 02:03:01.606324       1 shared_informer.go:247] Caches are synced for taint 
	I0915 02:03:01.606524       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0915 02:03:01.606665       1 node_lifecycle_controller.go:1013] Missing timestamp for Node functional-20210915015618-22140. Assuming now as a timestamp.
	I0915 02:03:01.606725       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0915 02:03:01.611317       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0915 02:03:01.612462       1 event.go:291] "Event occurred" object="functional-20210915015618-22140" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210915015618-22140 event: Registered Node functional-20210915015618-22140 in Controller"
	I0915 02:03:01.614410       1 shared_informer.go:247] Caches are synced for deployment 
	I0915 02:03:01.622872       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 02:03:01.632407       1 shared_informer.go:247] Caches are synced for expand 
	I0915 02:03:01.685053       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0915 02:03:01.694655       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 02:03:01.699734       1 shared_informer.go:247] Caches are synced for stateful set 
	I0915 02:03:01.704424       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0915 02:03:01.715725       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 02:03:01.717100       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 02:03:01.720398       1 shared_informer.go:247] Caches are synced for cronjob 
	I0915 02:03:01.798811       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 02:03:02.112081       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 02:03:02.112444       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 02:03:02.200340       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 02:03:20.621101       1 event.go:291] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-6cbfcd7cbc to 1"
	I0915 02:03:20.797870       1 event.go:291] "Event occurred" object="default/hello-node-6cbfcd7cbc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-6cbfcd7cbc-qcw6b"
	I0915 02:04:15.791804       1 event.go:291] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-9bbbc5bbb to 1"
	I0915 02:04:15.921820       1 event.go:291] "Event occurred" object="default/mysql-9bbbc5bbb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-9bbbc5bbb-fk78l"
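	
	[editor's note] The "Event occurred" lines are controller-manager events; the same sequence is visible from the cluster with:
	
	    kubectl --context functional-20210915015618-22140 get events -A --sort-by=.metadata.creationTimestamp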
	
	* 
	* ==> kube-proxy [5b333d3f35d1] <==
	* I0915 01:59:42.202442       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 01:59:42.203875       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 01:59:42.204472       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 01:59:42.598259       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 01:59:42.598352       1 server_others.go:212] Using iptables Proxier.
	I0915 01:59:42.598378       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 01:59:42.598430       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 01:59:42.600054       1 server.go:649] Version: v1.22.1
	I0915 01:59:42.602659       1 config.go:315] Starting service config controller
	I0915 01:59:42.602827       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 01:59:42.602860       1 config.go:224] Starting endpoint slice config controller
	I0915 01:59:42.602872       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0915 01:59:42.686339       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915015618-22140.16a4dc4bad36a598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04872c3a3dc8ea8, ext:1580136301, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915015618-22140", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-20210915015618-22140", UID:"functional-20210915015618-22140", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915015618-22140.16a4dc4bad36a598" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 01:59:42.703589       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 01:59:42.704439       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [916ec8136bf5] <==
	* E0915 02:01:43.932757       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915015618-22140": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 02:01:45.185827       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915015618-22140": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 02:01:57.499354       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915015618-22140": net/http: TLS handshake timeout
	E0915 02:02:06.833024       1 node.go:161] Failed to retrieve node info: nodes "functional-20210915015618-22140" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found]
	E0915 02:02:15.378864       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915015618-22140": dial tcp 192.168.49.2:8441: connect: connection refused
	I0915 02:02:37.130258       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 02:02:37.130742       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 02:02:37.130917       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 02:02:37.785925       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 02:02:37.786089       1 server_others.go:212] Using iptables Proxier.
	I0915 02:02:37.786130       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 02:02:37.786203       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 02:02:37.820195       1 server.go:649] Version: v1.22.1
	I0915 02:02:37.833229       1 config.go:315] Starting service config controller
	I0915 02:02:37.833855       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 02:02:37.834216       1 config.go:224] Starting endpoint slice config controller
	I0915 02:02:37.834502       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0915 02:02:37.938992       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915015618-22140.16a4dc7479b56c44", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04872ef718b28ac, ext:55192267601, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915015618-22140", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-20210915015618-22140", UID:"functional-20210915015618-22140", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915015618-22140.16a4dc7479b56c44" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 02:02:38.037986       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 02:02:38.038860       1 shared_informer.go:247] Caches are synced for service config 
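	
	[editor's note] The 02:02:06 RBAC denial is transient: the restarted apiserver was still re-creating its default cluster roles when kube-proxy retried. Once the cluster settles, the permission can be verified with:
	
	    kubectl auth can-i get nodes --as=system:serviceaccount:kube-system:kube-proxy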
	
	* 
	* ==> kube-scheduler [541a642f582c] <==
	* E0915 01:59:17.733805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 01:59:17.734150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 01:59:17.788888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 01:59:17.789789       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 01:59:17.812894       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 01:59:18.623434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 01:59:18.736329       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 01:59:18.788742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 01:59:18.815941       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 01:59:18.897218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 01:59:18.912944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 01:59:18.924942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 01:59:19.025501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 01:59:19.086304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 01:59:19.119711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 01:59:19.205728       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 01:59:19.256600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 01:59:19.295420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 01:59:19.391511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 01:59:19.393772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 01:59:20.716861       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0915 01:59:25.204274       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0915 02:01:36.033743       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 02:01:36.036795       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0915 02:01:36.086550       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [6ed51bf0bd32] <==
	* W0915 02:02:04.816303       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 02:02:04.816322       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 02:02:06.600111       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 02:02:06.600181       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 02:02:06.600195       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0915 02:02:06.600344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0915 02:02:06.701357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0915 02:02:06.701517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0915 02:02:06.701546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0915 02:02:06.701921       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0915 02:02:06.708361       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0915 02:02:06.818626       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 02:02:06.818862       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 02:02:06.818940       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 02:02:06.819214       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 02:02:06.819248       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 02:02:06.819282       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 02:02:06.819301       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0915 02:02:06.904061       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0915 02:02:08.894377       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	E0915 02:02:11.492499       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?resourceVersion=569": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 02:02:14.914247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?resourceVersion=569": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 02:02:22.849565       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?resourceVersion=569": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 02:02:36.528208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0915 02:02:36.528329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 01:56:35 UTC, end at Wed 2021-09-15 02:04:50 UTC. --
	Sep 15 02:02:47 functional-20210915015618-22140 kubelet[6169]: I0915 02:02:47.438818    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-mjv2b through plugin: invalid network status for"
	Sep 15 02:02:48 functional-20210915015618-22140 kubelet[6169]: I0915 02:02:48.653329    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-mjv2b through plugin: invalid network status for"
	Sep 15 02:02:58 functional-20210915015618-22140 kubelet[6169]: E0915 02:02:58.218426    6169 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/b75e18fa7578910a1d98f20a725186431200dc40a01a81fcd8929194223328a4/diff" to get inode usage: stat /var/lib/docker/overlay2/b75e18fa7578910a1d98f20a725186431200dc40a01a81fcd8929194223328a4/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/d86a240ab762fd651838fe9734a44b5993a2de4bf0bcf6fc4ff412488c2dc690" to get inode usage: stat /var/lib/docker/containers/d86a240ab762fd651838fe9734a44b5993a2de4bf0bcf6fc4ff412488c2dc690: no such file or directory
	Sep 15 02:02:58 functional-20210915015618-22140 kubelet[6169]: E0915 02:02:58.701974    6169 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/e9385a2d19f0baae5456d6f6205f4798ada42d0e47ed0c59f8ebcb5801f32c55/diff" to get inode usage: stat /var/lib/docker/overlay2/e9385a2d19f0baae5456d6f6205f4798ada42d0e47ed0c59f8ebcb5801f32c55/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/33b8e6081cb2fb049a9b7b0b3ba3875c51d917b77b3483ff138bc43d8d48e2e2" to get inode usage: stat /var/lib/docker/containers/33b8e6081cb2fb049a9b7b0b3ba3875c51d917b77b3483ff138bc43d8d48e2e2: no such file or directory
	Sep 15 02:02:58 functional-20210915015618-22140 kubelet[6169]: I0915 02:02:58.977916    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-mjv2b through plugin: invalid network status for"
	Sep 15 02:03:20 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:20.833291    6169 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 02:03:20 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:20.999575    6169 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2lk\" (UniqueName: \"kubernetes.io/projected/d0f03f27-657f-4faa-aca1-20b5cd736976-kube-api-access-xh2lk\") pod \"hello-node-6cbfcd7cbc-qcw6b\" (UID: \"d0f03f27-657f-4faa-aca1-20b5cd736976\") "
	Sep 15 02:03:23 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:23.936025    6169 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2f53ed503a06204ca64efc99c88b29097dd98af4cfdd8f50c02ecbfb3abf8ca9"
	Sep 15 02:03:23 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:23.940282    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-qcw6b through plugin: invalid network status for"
	Sep 15 02:03:25 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:25.011738    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-qcw6b through plugin: invalid network status for"
	Sep 15 02:03:56 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:56.233372    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-qcw6b through plugin: invalid network status for"
	Sep 15 02:03:57 functional-20210915015618-22140 kubelet[6169]: I0915 02:03:57.461143    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-qcw6b through plugin: invalid network status for"
	Sep 15 02:04:16 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:16.113207    6169 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 02:04:16 functional-20210915015618-22140 kubelet[6169]: W0915 02:04:16.126517    6169 container.go:586] Failed to update stats for container "/kubepods/burstable/pod574d0def-aef2-463f-94bc-969a7cb0d8d4": /sys/fs/cgroup/cpuset/kubepods/burstable/pod574d0def-aef2-463f-94bc-969a7cb0d8d4/cpuset.cpus found to be empty, continuing to push stats
	Sep 15 02:04:16 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:16.297085    6169 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9fg\" (UniqueName: \"kubernetes.io/projected/574d0def-aef2-463f-94bc-969a7cb0d8d4-kube-api-access-4n9fg\") pod \"mysql-9bbbc5bbb-fk78l\" (UID: \"574d0def-aef2-463f-94bc-969a7cb0d8d4\") "
	Sep 15 02:04:18 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:18.502090    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-9bbbc5bbb-fk78l through plugin: invalid network status for"
	Sep 15 02:04:18 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:18.504815    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-9bbbc5bbb-fk78l through plugin: invalid network status for"
	Sep 15 02:04:18 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:18.545906    6169 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="458fa4cbcbe0d2b0609a5928c64bf56fc9904106b145cb109d63522f9eafed3b"
	Sep 15 02:04:19 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:19.625642    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-9bbbc5bbb-fk78l through plugin: invalid network status for"
	Sep 15 02:04:23 functional-20210915015618-22140 kubelet[6169]: E0915 02:04:23.770201    6169 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod574d0def-aef2-463f-94bc-969a7cb0d8d4\": RecentStats: unable to find data in memory cache]"
	Sep 15 02:04:24 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:24.755805    6169 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 02:04:24 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:24.883968    6169 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kbxk\" (UniqueName: \"kubernetes.io/projected/e2437971-9590-42bb-a403-dfa7943d0609-kube-api-access-2kbxk\") pod \"nginx-svc\" (UID: \"e2437971-9590-42bb-a403-dfa7943d0609\") "
	Sep 15 02:04:31 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:31.585446    6169 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c8a1d10e1e6310ea7a9f2567631c29f4f1f61a5ac4aa7bcb70e7a3a0a3eace3f"
	Sep 15 02:04:31 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:31.617922    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/nginx-svc through plugin: invalid network status for"
	Sep 15 02:04:32 functional-20210915015618-22140 kubelet[6169]: I0915 02:04:32.842186    6169 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/nginx-svc through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [2345d4c4037d] <==
	* I0915 02:01:43.641280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0915 02:01:43.727961       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [30f50896bfb0] <==
	* I0915 02:02:45.334723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 02:02:45.496234       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 02:02:45.496360       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 02:03:03.194821       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 02:03:03.195351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb37d748-5388-4608-beb0-73cfe0de4815", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210915015618-22140_3a199c72-b2e2-405f-be76-a0489a56b8ef became leader
	I0915 02:03:03.196534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210915015618-22140_3a199c72-b2e2-405f-be76-a0489a56b8ef!
	I0915 02:03:03.298610       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210915015618-22140_3a199c72-b2e2-405f-be76-a0489a56b8ef!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210915015618-22140 -n functional-20210915015618-22140
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210915015618-22140 -n functional-20210915015618-22140: (4.5714282s)
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20210915015618-22140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: mysql-9bbbc5bbb-fk78l nginx-svc
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/LoadImageFromFile]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20210915015618-22140 describe pod mysql-9bbbc5bbb-fk78l nginx-svc
helpers_test.go:281: (dbg) kubectl --context functional-20210915015618-22140 describe pod mysql-9bbbc5bbb-fk78l nginx-svc:

-- stdout --
	Name:           mysql-9bbbc5bbb-fk78l
	Namespace:      default
	Priority:       0
	Node:           functional-20210915015618-22140/192.168.49.2
	Start Time:     Wed, 15 Sep 2021 02:04:16 +0000
	Labels:         app=mysql
	                pod-template-hash=9bbbc5bbb
	Annotations:    <none>
	Status:         Pending
	IP:             
	IPs:            <none>
	Controlled By:  ReplicaSet/mysql-9bbbc5bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4n9fg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-4n9fg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  41s   default-scheduler  Successfully assigned default/mysql-9bbbc5bbb-fk78l to functional-20210915015618-22140
	  Normal  Pulling    39s   kubelet            Pulling image "mysql:5.7"
	
	
	Name:         nginx-svc
	Namespace:    default
	Priority:     0
	Node:         functional-20210915015618-22140/192.168.49.2
	Start Time:   Wed, 15 Sep 2021 02:04:24 +0000
	Labels:       run=nginx-svc
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2kbxk (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-2kbxk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  33s   default-scheduler  Successfully assigned default/nginx-svc to functional-20210915015618-22140
	  Normal  Pulling    26s   kubelet            Pulling image "nginx:alpine"

-- /stdout --
helpers_test.go:284: <<< TestFunctional/parallel/LoadImageFromFile FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/LoadImageFromFile (47.55s)
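
Note on the post-mortem above: both non-running pods were Pending in ContainerCreating, with image pulls ("mysql:5.7", "nginx:alpine") only ~30-40s old at capture time, so they read as in-flight workloads from parallel subtests rather than crashes tied to LoadImageFromFile itself. To separate a stuck pull from a merely slow one, a minimal diagnostic sketch in Go (not part of the test suite; it assumes kubectl on PATH and reuses the context and pod names from the logs above):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// Poll the two Pending pods from the post-mortem every ten seconds,
	// stopping once neither reports Pending any more.
	func main() {
		for deadline := time.Now().Add(5 * time.Minute); time.Now().Before(deadline); time.Sleep(10 * time.Second) {
			out, err := exec.Command("kubectl",
				"--context", "functional-20210915015618-22140",
				"get", "po", "mysql-9bbbc5bbb-fk78l", "nginx-svc",
				"-o", "jsonpath={range .items[*]}{.metadata.name}={.status.phase} {end}",
			).CombinedOutput()
			if err != nil {
				fmt.Println("kubectl:", err)
				continue
			}
			s := string(out)
			fmt.Println(s)
			if !strings.Contains(s, "=Pending") {
				return // both pods left Pending; the pulls were merely slow
			}
		}
		fmt.Println("gave up after five minutes; a pull is likely stuck")
	}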

TestScheduledStopWindows (273.67s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20210915025901-22140 --memory=2048 --driver=docker
E0915 02:59:45.223321   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20210915025901-22140 --memory=2048 --driver=docker: (3m36.0792792s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915025901-22140 --schedule 5m
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915025901-22140 --schedule 5m: (7.158367s)
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210915025901-22140 -n scheduled-stop-20210915025901-22140
scheduled_stop_test.go:192: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210915025901-22140 -n scheduled-stop-20210915025901-22140: (5.9532051s)
scheduled_stop_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210915025901-22140 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210915025901-22140 -- sudo systemctl show minikube-scheduled-stop --no-page: (4.7994849s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915025901-22140 --schedule 5s
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915025901-22140 --schedule 5s: (3.2670469s)
E0915 03:03:04.279584   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20210915025901-22140
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20210915025901-22140: exit status 3 (5.3502297s)

-- stdout --
	scheduled-stop-20210915025901-22140
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	! Executing "docker container inspect scheduled-stop-20210915025901-22140 --format={{.State.Status}}" took an unusually long time: 2.1145706s
	* Restarting the docker service may improve performance.
	E0915 03:03:18.621356   12868 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0915 03:03:18.621356   12868 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
scheduled_stop_test.go:210: minikube status: exit status 3

-- stdout --
	scheduled-stop-20210915025901-22140
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	! Executing "docker container inspect scheduled-stop-20210915025901-22140 --format={{.State.Status}}" took an unusually long time: 2.1145706s
	* Restarting the docker service may improve performance.
	E0915 03:03:18.621356   12868 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0915 03:03:18.621356   12868 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
panic.go:642: *** TestScheduledStopWindows FAILED at 2021-09-15 03:03:18.6509981 +0000 GMT m=+5700.467694001
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210915025901-22140
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210915025901-22140:

-- stdout --
	[
	    {
	        "Id": "749bf9cf21aa4b3c335378d1f010f4aff8d2648b4b539aa4744fe2479b813a1e",
	        "Created": "2021-09-15T02:59:15.221339Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2021-09-15T02:59:16.9996653Z",
	            "FinishedAt": "2021-09-15T03:03:15.8820936Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/749bf9cf21aa4b3c335378d1f010f4aff8d2648b4b539aa4744fe2479b813a1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/749bf9cf21aa4b3c335378d1f010f4aff8d2648b4b539aa4744fe2479b813a1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/749bf9cf21aa4b3c335378d1f010f4aff8d2648b4b539aa4744fe2479b813a1e/hosts",
	        "LogPath": "/var/lib/docker/containers/749bf9cf21aa4b3c335378d1f010f4aff8d2648b4b539aa4744fe2479b813a1e/749bf9cf21aa4b3c335378d1f010f4aff8d2648b4b539aa4744fe2479b813a1e-json.log",
	        "Name": "/scheduled-stop-20210915025901-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210915025901-22140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210915025901-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c5de7432d288be6d5d175ac2a95ca0afe5506e1f5e33f9ee3eb7217c22ff7a2a-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/docker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf2107f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5de7432d288be6d5d175ac2a95ca0afe5506e1f5e33f9ee3eb7217c22ff7a2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5de7432d288be6d5d175ac2a95ca0afe5506e1f5e33f9ee3eb7217c22ff7a2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5de7432d288be6d5d175ac2a95ca0afe5506e1f5e33f9ee3eb7217c22ff7a2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210915025901-22140",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210915025901-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210915025901-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210915025901-22140",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210915025901-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "75404d2ffee288b821c880a500814a646ae1e3e103b55f3e0aa9b452d40c3ae1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/75404d2ffee2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210915025901-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "749bf9cf21aa",
	                        "scheduled-stop-20210915025901-22140"
	                    ],
	                    "NetworkID": "e10bfe361ee04bdf870e7af79180268c5ea7439cfd8240b07662cae2a8fee8d4",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210915025901-22140 -n scheduled-stop-20210915025901-22140
E0915 03:03:21.204339   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210915025901-22140 -n scheduled-stop-20210915025901-22140: exit status 7 (2.4809308s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210915025901-22140" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210915025901-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20210915025901-22140
E0915 03:03:22.134944   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20210915025901-22140: (12.9265378s)
--- FAIL: TestScheduledStopWindows (273.67s)
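
Reading the docker inspect above against the timestamps: the kic container finished stopping at 03:03:15.88 (ExitCode 130), the failing status call ran at 03:03:18 and still went down the SSH path (status.go:374), returning exit status 3 with host: Error, while the helpers' follow-up status a few seconds later saw a plain Stopped (exit status 7, "may be ok"). That sequence points at a shutdown race around the --schedule 5s stop rather than a broken profile. A minimal settle-wait sketch in Go, assuming docker on PATH and reusing the container name from the log, that an investigator could run before re-checking status:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// Wait until the kic container has fully exited before asserting on
	// "minikube status", so the check cannot race the scheduled stop.
	func main() {
		const name = "scheduled-stop-20210915025901-22140"
		for i := 0; i < 60; i++ {
			out, err := exec.Command("docker", "container", "inspect",
				name, "--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				fmt.Println("container exited; minikube status should now report Stopped")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for", name, "to exit")
	}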

TestInsufficientStorage (51.23s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20210915030852-22140 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20210915030852-22140 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (32.5534253s)

-- stdout --
	{"specversion":"1.0","id":"17a67b03-5b2f-4f9d-9842-17d8b6f4c897","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20210915030852-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3eb9206f-f3f4-4204-a90a-27ef1bcccf60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"14899d75-01a9-4b71-ae41-0801ca13a6c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"b5f2de3f-9141-4c26-bec1-074ef5686699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12425"}}
	{"specversion":"1.0","id":"9c281ba3-141f-466f-babf-915214905c43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6931f495-7d75-41be-80dc-44b3e52c4bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"39b46ba0-d14e-4532-9a8e-ddaf5b5c3a71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210915030852-22140 in cluster insufficient-storage-20210915030852-22140","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a806e95f-ecd3-4a64-9d36-e9a48fbf3247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a98eb33-603d-4cda-8d0a-fa7ac3639598","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1531cd7a-99c8-47e1-921c-c752880ada8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210915030852-22140 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210915030852-22140 --output=json --layout=cluster: exit status 7 (3.3979508s)

-- stdout --
	{"Name":"insufficient-storage-20210915030852-22140","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915030852-22140","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 03:09:28.724182     344 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210915030852-22140" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210915030852-22140 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210915030852-22140 --output=json --layout=cluster: exit status 7 (4.7507033s)

-- stdout --
	{"specversion":"1.0","id":"bc2f81f0-8690-4413-a7f4-534ed044b0cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Executing \"docker container inspect insufficient-storage-20210915030852-22140 --format={{.State.Status}}\" took an unusually long time: 2.0212702s"}}
	{"specversion":"1.0","id":"acabd00e-8622-4e5f-b376-b31b5546d414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Restarting the docker service may improve performance."}}
	{"Name":"insufficient-storage-20210915030852-22140","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915030852-22140","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 03:09:33.483336   20136 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210915030852-22140" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	E0915 03:09:33.567452   20136 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins\minikube-integration\.minikube\profiles\insufficient-storage-20210915030852-22140\events.json: The system cannot find the file specified.

** /stderr **
status_test.go:88: unmarshalling: invalid character '{' after top-level value
helpers_test.go:176: Cleaning up "insufficient-storage-20210915030852-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20210915030852-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20210915030852-22140: (10.5171488s)
--- FAIL: TestInsufficientStorage (51.23s)
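The unmarshalling failure above is mechanical: with --output=json, stdout carried two CloudEvents warning objects (the "specversion" lines) ahead of the cluster status object, so the capture is several top-level JSON values rather than one document, and a single json.Unmarshal stops at the second '{'. A minimal Go sketch of one way to read such a stream, assuming only that the objects are concatenated top-level values (the literals below are abbreviated from the capture above, not the full minikube schema):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

func main() {
	// Abbreviated stand-in for the stdout captured above: warning events
	// precede the cluster status, so this is a stream of JSON values,
	// not a single document.
	out := []byte(`{"specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
{"Name":"insufficient-storage-20210915030852-22140","StatusCode":507,"StatusName":"InsufficientStorage"}`)

	// json.Unmarshal(out, ...) fails exactly as in the test:
	// "invalid character '{' after top-level value".
	// A json.Decoder instead reads the values one at a time.
	dec := json.NewDecoder(bytes.NewReader(out))
	var last json.RawMessage
	for {
		var obj json.RawMessage
		if err := dec.Decode(&obj); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		last = obj // the status object is emitted last
	}
	fmt.Printf("%s\n", last)
}

Keeping the last decoded value works here only because the status object follows the warnings in the capture above.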

                                                
                                    
x
+
TestPause/serial/VerifyStatus (44.99s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20210915030944-22140 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20210915030944-22140 --output=json --layout=cluster: exit status 2 (8.6339732s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3e98e1a4-5882-4f81-80bc-7dee2aabad4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Executing \"docker container inspect pause-20210915030944-22140 --format={{.State.Status}}\" took an unusually long time: 2.3890142s"}}
	{"specversion":"1.0","id":"ae323ddc-4bfe-435a-bcac-9fbd5e18354c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Restarting the docker service may improve performance."}}
	{"Name":"pause-20210915030944-22140","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 13 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210915030944-22140","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:187: unmarshalling: invalid character '{' after top-level value
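Same unmarshalling failure as TestInsufficientStorage: the two warning events ahead of the status object make stdout a multi-object stream. The status payload itself looks like a correctly paused cluster (HTTP-flavoured codes in this report: 418 "Paused" for the apiserver, 405 "Stopped" for the kubelet, 200 "OK" for kubeconfig); only the decoding tripped. A complementary sketch that filters the event lines out before unmarshalling, assuming each top-level object sits on its own line as in the capture above, with a pared-down stand-in for the status struct:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ClusterState is a pared-down stand-in for the status schema; the real
// object in the capture above has more fields (Components, Nodes, ...).
type ClusterState struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.warning","data":{"message":"..."}}
{"Name":"pause-20210915030944-22140","StatusCode":418,"StatusName":"Paused"}`

	for _, line := range strings.Split(stdout, "\n") {
		// Event envelopes carry a "specversion" field; the status does not.
		if strings.Contains(line, `"specversion"`) {
			continue
		}
		var st ClusterState
		if err := json.Unmarshal([]byte(line), &st); err != nil {
			fmt.Println("unmarshal:", err)
			return
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	}
}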
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210915030944-22140
helpers_test.go:236: (dbg) docker inspect pause-20210915030944-22140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb",
	        "Created": "2021-09-15T03:10:12.8998703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124074,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T03:10:17.1935737Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/hosts",
	        "LogPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb-json.log",
	        "Name": "/pause-20210915030944-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210915030944-22140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210915030944-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a
55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/d
ocker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf210
7f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210915030944-22140",
	                "Source": "/var/lib/docker/volumes/pause-20210915030944-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210915030944-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210915030944-22140",
	                "name.minikube.sigs.k8s.io": "pause-20210915030944-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6fda170d73e244bd6aad9a26e46ef371ebb8ce6164861df6cd909a40fd3abd0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6fda170d73e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210915030944-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e485399e65f1",
	                        "pause-20210915030944-22140"
	                    ],
	                    "NetworkID": "2fd93bbee36130a1ee184cebd0d9aa1d8a4b662381088e940005af0feac8e13a",
	                    "EndpointID": "ad3bf326aa3567be8405b8ecd24754e80fa72e45792f36c5a0917136cd4daf9e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
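Everything the surrounding probes ask of this container is already in the inspect dump: .State.Status ("running", which the status --format={{.Host}} call below reports as "Running") and the 8443/tcp host port (127.0.0.1:58454, the apiserver endpoint). A small Go sketch that extracts both, assuming docker is on PATH and the profile container from this report still exists:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same container name as the profile in this report; adjust as needed.
	out, err := exec.Command("docker", "container", "inspect",
		"pause-20210915030944-22140").Output()
	if err != nil {
		panic(err)
	}
	// docker inspect prints a JSON array; decode only the fields needed.
	var infos []struct {
		State           struct{ Status string }
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}
	if err := json.Unmarshal(out, &infos); err != nil {
		panic(err)
	}
	fmt.Println("state:", infos[0].State.Status) // "running" in the dump above
	for _, b := range infos[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver: %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:58454
	}
}

The inline template form the log itself uses later, docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" <name>, avoids the round-trip through Go entirely.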
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915030944-22140 -n pause-20210915030944-22140
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915030944-22140 -n pause-20210915030944-22140: exit status 2 (7.0795243s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect pause-20210915030944-22140 --format={{.State.Status}}" took an unusually long time: 2.4550888s
	* Restarting the docker service may improve performance.

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-20210915030944-22140 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p pause-20210915030944-22140 logs -n 25: exit status 110 (27.4758208s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |------------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	|  Command   |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|------------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p         | multinode-20210915022405-22140            | multinode-20210915022405-22140            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:41:16 GMT | Wed, 15 Sep 2021 02:41:40 GMT |
	|            | node delete m03                           |                                           |                         |         |                               |                               |
	| -p         | multinode-20210915022405-22140            | multinode-20210915022405-22140            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:41:49 GMT | Wed, 15 Sep 2021 02:42:21 GMT |
	|            | stop                                      |                                           |                         |         |                               |                               |
	| start      | -p                                        | multinode-20210915022405-22140            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:42:28 GMT | Wed, 15 Sep 2021 02:45:50 GMT |
	|            | multinode-20210915022405-22140            |                                           |                         |         |                               |                               |
	|            | --wait=true -v=8                          |                                           |                         |         |                               |                               |
	|            | --alsologtostderr                         |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| start      | -p                                        | multinode-20210915022405-22140-m03        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:45:59 GMT | Wed, 15 Sep 2021 02:49:47 GMT |
	|            | multinode-20210915022405-22140-m03        |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| delete     | -p                                        | multinode-20210915022405-22140-m03        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:49:53 GMT | Wed, 15 Sep 2021 02:50:12 GMT |
	|            | multinode-20210915022405-22140-m03        |                                           |                         |         |                               |                               |
	| delete     | -p                                        | multinode-20210915022405-22140            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:50:12 GMT | Wed, 15 Sep 2021 02:50:42 GMT |
	|            | multinode-20210915022405-22140            |                                           |                         |         |                               |                               |
	| start      | -p                                        | test-preload-20210915025042-22140         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:50:43 GMT | Wed, 15 Sep 2021 02:55:04 GMT |
	|            | test-preload-20210915025042-22140         |                                           |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|            | --wait=true --preload=false               |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	|            | --kubernetes-version=v1.17.0              |                                           |                         |         |                               |                               |
	| ssh        | -p                                        | test-preload-20210915025042-22140         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:55:05 GMT | Wed, 15 Sep 2021 02:55:12 GMT |
	|            | test-preload-20210915025042-22140         |                                           |                         |         |                               |                               |
	|            | -- docker pull busybox                    |                                           |                         |         |                               |                               |
	| start      | -p                                        | test-preload-20210915025042-22140         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:55:12 GMT | Wed, 15 Sep 2021 02:58:39 GMT |
	|            | test-preload-20210915025042-22140         |                                           |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker          |                                           |                         |         |                               |                               |
	|            | --kubernetes-version=v1.17.3              |                                           |                         |         |                               |                               |
	| ssh        | -p                                        | test-preload-20210915025042-22140         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:58:40 GMT | Wed, 15 Sep 2021 02:58:43 GMT |
	|            | test-preload-20210915025042-22140         |                                           |                         |         |                               |                               |
	|            | -- docker images                          |                                           |                         |         |                               |                               |
	| delete     | -p                                        | test-preload-20210915025042-22140         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:58:44 GMT | Wed, 15 Sep 2021 02:59:00 GMT |
	|            | test-preload-20210915025042-22140         |                                           |                         |         |                               |                               |
	| start      | -p                                        | scheduled-stop-20210915025901-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 02:59:01 GMT | Wed, 15 Sep 2021 03:02:37 GMT |
	|            | scheduled-stop-20210915025901-22140       |                                           |                         |         |                               |                               |
	|            | --memory=2048 --driver=docker             |                                           |                         |         |                               |                               |
	| stop       | -p                                        | scheduled-stop-20210915025901-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:02:37 GMT | Wed, 15 Sep 2021 03:02:44 GMT |
	|            | scheduled-stop-20210915025901-22140       |                                           |                         |         |                               |                               |
	|            | --schedule 5m                             |                                           |                         |         |                               |                               |
	| ssh        | -p                                        | scheduled-stop-20210915025901-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:02:50 GMT | Wed, 15 Sep 2021 03:02:54 GMT |
	|            | scheduled-stop-20210915025901-22140       |                                           |                         |         |                               |                               |
	|            | -- sudo systemctl show                    |                                           |                         |         |                               |                               |
	|            | minikube-scheduled-stop --no-page         |                                           |                         |         |                               |                               |
	| stop       | -p                                        | scheduled-stop-20210915025901-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:02:55 GMT | Wed, 15 Sep 2021 03:02:58 GMT |
	|            | scheduled-stop-20210915025901-22140       |                                           |                         |         |                               |                               |
	|            | --schedule 5s                             |                                           |                         |         |                               |                               |
	| delete     | -p                                        | scheduled-stop-20210915025901-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:03:22 GMT | Wed, 15 Sep 2021 03:03:34 GMT |
	|            | scheduled-stop-20210915025901-22140       |                                           |                         |         |                               |                               |
	| start      | -p                                        | skaffold-20210915030334-22140             | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:03:36 GMT | Wed, 15 Sep 2021 03:06:56 GMT |
	|            | skaffold-20210915030334-22140             |                                           |                         |         |                               |                               |
	|            | --memory=2600 --driver=docker             |                                           |                         |         |                               |                               |
	| docker-env | --shell none -p                           | skaffold-20210915030334-22140             | skaffold                | v1.23.0 | Wed, 15 Sep 2021 03:06:58 GMT | Wed, 15 Sep 2021 03:07:05 GMT |
	|            | skaffold-20210915030334-22140             |                                           |                         |         |                               |                               |
	|            | --user=skaffold                           |                                           |                         |         |                               |                               |
	| delete     | -p                                        | skaffold-20210915030334-22140             | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:08:34 GMT | Wed, 15 Sep 2021 03:08:52 GMT |
	|            | skaffold-20210915030334-22140             |                                           |                         |         |                               |                               |
	| delete     | -p                                        | insufficient-storage-20210915030852-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:09:33 GMT | Wed, 15 Sep 2021 03:09:44 GMT |
	|            | insufficient-storage-20210915030852-22140 |                                           |                         |         |                               |                               |
	| start      | -p pause-20210915030944-22140             | pause-20210915030944-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:09:44 GMT | Wed, 15 Sep 2021 03:20:08 GMT |
	|            | --memory=2048                             |                                           |                         |         |                               |                               |
	|            | --install-addons=false                    |                                           |                         |         |                               |                               |
	|            | --wait=all --driver=docker                |                                           |                         |         |                               |                               |
	| start      | -p                                        | offline-docker-20210915030944-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:09:44 GMT | Wed, 15 Sep 2021 03:20:19 GMT |
	|            | offline-docker-20210915030944-22140       |                                           |                         |         |                               |                               |
	|            | --alsologtostderr -v=1                    |                                           |                         |         |                               |                               |
	|            | --memory=2048 --wait=true                 |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| delete     | -p                                        | offline-docker-20210915030944-22140       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:20:19 GMT | Wed, 15 Sep 2021 03:20:46 GMT |
	|            | offline-docker-20210915030944-22140       |                                           |                         |         |                               |                               |
	| start      | -p pause-20210915030944-22140             | pause-20210915030944-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:20:10 GMT | Wed, 15 Sep 2021 03:21:42 GMT |
	|            | --alsologtostderr -v=1                    |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| pause      | -p pause-20210915030944-22140             | pause-20210915030944-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:21:43 GMT | Wed, 15 Sep 2021 03:22:01 GMT |
	|            | --alsologtostderr -v=5                    |                                           |                         |         |                               |                               |
	|------------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 03:20:47
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 03:20:47.510135   53904 out.go:298] Setting OutFile to fd 1436 ...
	I0915 03:20:47.512428   53904 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 03:20:47.512428   53904 out.go:311] Setting ErrFile to fd 1636...
	I0915 03:20:47.512666   53904 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 03:20:47.543013   53904 out.go:305] Setting JSON to false
	I0915 03:20:47.564315   53904 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10278830,"bootTime":1621397217,"procs":158,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 03:20:47.564545   53904 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 03:20:47.569264   53904 out.go:177] * [force-systemd-flag-20210915032047-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 03:20:47.569859   53904 notify.go:169] Checking for updates...
	I0915 03:20:47.572643   53904 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 03:20:47.575416   53904 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 03:20:45.612797    7432 ssh_runner.go:192] Completed: sudo systemctl enable docker.socket: (1.1469776s)
	I0915 03:20:45.642526    7432 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 03:20:45.812157    7432 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 03:20:46.984512    7432 ssh_runner.go:192] Completed: sudo systemctl daemon-reload: (1.1723595s)
	I0915 03:20:47.002191    7432 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 03:20:47.157017    7432 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 03:20:47.927468    7432 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 03:20:48.289273    7432 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
	I0915 03:20:48.315759    7432 cli_runner.go:115] Run: docker exec -t pause-20210915030944-22140 dig +short host.docker.internal
	I0915 03:20:49.905848    7432 cli_runner.go:168] Completed: docker exec -t pause-20210915030944-22140 dig +short host.docker.internal: (1.5897052s)
	I0915 03:20:49.905848    7432 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 03:20:49.918844    7432 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 03:20:50.012183    7432 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210915030944-22140
	I0915 03:20:47.578328   53904 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 03:20:47.580811   53904 config.go:177] Loaded profile config "pause-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 03:20:47.581536   53904 config.go:177] Loaded profile config "running-upgrade-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 03:20:47.582251   53904 config.go:177] Loaded profile config "stopped-upgrade-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 03:20:47.582465   53904 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 03:20:49.967631   53904 docker.go:132] docker version: linux-20.10.5
	I0915 03:20:49.979859   53904 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 03:20:51.405746   53904 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.4258917s)
	I0915 03:20:51.406747   53904 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:76 SystemTime:2021-09-15 03:20:50.7437709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 03:20:51.418154   53904 out.go:177] * Using the docker driver based on user configuration
	I0915 03:20:51.418546   53904 start.go:278] selected driver: docker
	I0915 03:20:51.418546   53904 start.go:751] validating driver "docker" against <nil>
	I0915 03:20:51.418848   53904 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 03:20:51.548369   53904 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 03:20:52.852639   53904 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.304275s)
	I0915 03:20:52.853604   53904 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:73 SystemTime:2021-09-15 03:20:52.3197643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 03:20:52.853839   53904 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	I0915 03:20:52.854639   53904 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 03:20:52.854779   53904 cni.go:93] Creating CNI manager for ""
	I0915 03:20:52.854960   53904 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 03:20:52.854960   53904 start_flags.go:278] config:
	{Name:force-systemd-flag-20210915032047-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:force-systemd-flag-20210915032047-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 03:20:52.857953   53904 out.go:177] * Starting control plane node force-systemd-flag-20210915032047-22140 in cluster force-systemd-flag-20210915032047-22140
	I0915 03:20:52.857953   53904 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 03:20:52.857953   53904 out.go:177] * Pulling base image ...
	I0915 03:20:52.857953   53904 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 03:20:52.857953   53904 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 03:20:52.857953   53904 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 03:20:52.857953   53904 cache.go:57] Caching tarball of preloaded images
	I0915 03:20:52.857953   53904 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 03:20:52.857953   53904 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.1 on docker
	I0915 03:20:52.857953   53904 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\force-systemd-flag-20210915032047-22140\config.json ...
	I0915 03:20:52.863146   53904 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\force-systemd-flag-20210915032047-22140\config.json: {Name:mk1f54b0fe45af33ae0fa2c1953f0aec61fb492f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 03:20:53.601057   53904 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 03:20:53.601057   53904 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 03:20:53.601438   53904 cache.go:206] Successfully downloaded all kic artifacts
	I0915 03:20:53.601438   53904 start.go:313] acquiring machines lock for force-systemd-flag-20210915032047-22140: {Name:mkfecce1b856af46ef7349e02453e22f498d92a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 03:20:53.601855   53904 start.go:317] acquired machines lock for "force-systemd-flag-20210915032047-22140" in 197.4µs
	I0915 03:20:53.602266   53904 start.go:89] Provisioning new machine with config: &{Name:force-systemd-flag-20210915032047-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:force-systemd-flag-20210915032047-22140 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 03:20:53.602467   53904 start.go:126] createHost starting for "" (driver="docker")
	I0915 03:20:50.940675    7432 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 03:20:50.968709    7432 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 03:20:51.688617    7432 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 03:20:51.688617    7432 docker.go:489] Images already preloaded, skipping extraction
	I0915 03:20:51.704938    7432 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 03:20:52.394638    7432 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 03:20:52.394907    7432 cache_images.go:78] Images are preloaded, skipping loading
	I0915 03:20:52.410151    7432 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 03:20:54.523986    7432 ssh_runner.go:192] Completed: docker info --format {{.CgroupDriver}}: (2.1137356s)
	I0915 03:20:54.524171    7432 cni.go:93] Creating CNI manager for ""
	I0915 03:20:54.524385    7432 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 03:20:54.524385    7432 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 03:20:54.524385    7432 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210915030944-22140 NodeName:pause-20210915030944-22140 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/mi
nikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 03:20:54.524385    7432 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20210915030944-22140"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
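Aside: the YAML above is rendered from the kubeadm options struct logged at kubeadm.go:153. A minimal, hypothetical Go sketch of that kind of struct-to-YAML rendering with text/template follows; the struct fields and the trimmed-down template are illustrative, not minikube's actual code.

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.49.2",
		APIServerPort:     8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.22.1",
	}
	// Must panics on a template parse error, acceptable for a fixed template.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

Running it prints an InitConfiguration/ClusterConfiguration pair shaped like the block above.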
	
	I0915 03:20:54.524385    7432 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20210915030944-22140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.1 ClusterName:pause-20210915030944-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0915 03:20:54.545786    7432 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
	I0915 03:20:54.698387    7432 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 03:20:54.715745    7432 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 03:20:54.955135    7432 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0915 03:20:53.606380   53904 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0915 03:20:53.606380   53904 start.go:160] libmachine.API.Create for "force-systemd-flag-20210915032047-22140" (driver="docker")
	I0915 03:20:53.606380   53904 client.go:168] LocalClient.Create starting
	I0915 03:20:53.606380   53904 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0915 03:20:53.606380   53904 main.go:130] libmachine: Decoding PEM data...
	I0915 03:20:53.606380   53904 main.go:130] libmachine: Parsing certificate...
	I0915 03:20:53.606380   53904 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0915 03:20:53.606380   53904 main.go:130] libmachine: Decoding PEM data...
	I0915 03:20:53.606380   53904 main.go:130] libmachine: Parsing certificate...
	I0915 03:20:53.625144   53904 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20210915032047-22140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 03:20:54.370750   53904 cli_runner.go:162] docker network inspect force-systemd-flag-20210915032047-22140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 03:20:54.382211   53904 network_create.go:255] running [docker network inspect force-systemd-flag-20210915032047-22140] to gather additional debugging logs...
	I0915 03:20:54.382211   53904 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20210915032047-22140
	W0915 03:20:55.145510   53904 cli_runner.go:162] docker network inspect force-systemd-flag-20210915032047-22140 returned with exit code 1
	I0915 03:20:55.145510   53904 network_create.go:258] error running [docker network inspect force-systemd-flag-20210915032047-22140]: docker network inspect force-systemd-flag-20210915032047-22140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20210915032047-22140
	I0915 03:20:55.145510   53904 network_create.go:260] output of [docker network inspect force-systemd-flag-20210915032047-22140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20210915032047-22140
	
	** /stderr **
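The rerun without a --format template, shown above, exists purely to capture clean stdout/stderr for the log. A hedged sketch of that pattern with os/exec (the helper name is made up):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// inspectNetwork reruns "docker network inspect" with stdout and stderr
// captured separately, so a failure like "No such network" is preserved
// verbatim for the log.
func inspectNetwork(name string) {
	cmd := exec.Command("docker", "network", "inspect", name)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run() // a non-zero exit comes back as an *exec.ExitError
	fmt.Printf("-- stdout --\n%s-- /stdout --\n", stdout.String())
	if err != nil {
		fmt.Printf("** stderr **\n%s** /stderr **\nerror: %v\n", stderr.String(), err)
	}
}

func main() {
	inspectNetwork("force-systemd-flag-20210915032047-22140")
}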
	I0915 03:20:55.165543   53904 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 03:20:55.973037   53904 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006628] misses:0}
	I0915 03:20:55.973266   53904 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 03:20:55.973422   53904 network_create.go:106] attempt to create docker network force-systemd-flag-20210915032047-22140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 03:20:55.983967   53904 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20210915032047-22140
	W0915 03:20:56.803554   53904 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20210915032047-22140 returned with exit code 1
	W0915 03:20:56.803910   53904 network_create.go:98] failed to create docker network force-systemd-flag-20210915032047-22140 192.168.49.0/24, will retry: subnet is taken
	I0915 03:20:56.835655   53904 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006628] amended:false}} dirty:map[] misses:0}
	I0915 03:20:56.835655   53904 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 03:20:56.860302   53904 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006628] amended:true}} dirty:map[192.168.49.0:0xc000006628 192.168.58.0:0xc00070e1d0] misses:0}
	I0915 03:20:56.860502   53904 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 03:20:56.860502   53904 network_create.go:106] attempt to create docker network force-systemd-flag-20210915032047-22140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0915 03:20:56.873424   53904 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20210915032047-22140
	I0915 03:20:55.562622    7432 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 03:20:55.801667    7432 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0915 03:20:56.182530    7432 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 03:20:56.239768    7432 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140 for IP: 192.168.49.2
	I0915 03:20:56.239768    7432 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 03:20:56.239768    7432 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 03:20:56.239768    7432 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\client.key
	I0915 03:20:56.239768    7432 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\apiserver.key.dd3b5fb2
	I0915 03:20:56.239768    7432 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\proxy-client.key
	I0915 03:20:56.245969    7432 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22140.pem (1338 bytes)
	W0915 03:20:56.246803    7432 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22140_empty.pem, impossibly tiny 0 bytes
	I0915 03:20:56.247081    7432 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 03:20:56.247492    7432 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 03:20:56.248110    7432 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 03:20:56.248677    7432 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0915 03:20:56.249124    7432 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem (1708 bytes)
	I0915 03:20:56.254347    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 03:20:56.646661    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 03:20:56.997229    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 03:20:57.403134    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915030944-22140\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 03:20:57.732599    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 03:20:58.168756    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 03:20:58.538342    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 03:20:58.883400    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I0915 03:20:59.235792    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 03:20:59.912391    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22140.pem --> /usr/share/ca-certificates/22140.pem (1338 bytes)
	I0915 03:20:58.043010   53904 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20210915032047-22140: (1.1695899s)
	I0915 03:20:58.043689   53904 network_create.go:90] docker network force-systemd-flag-20210915032047-22140 192.168.58.0/24 created
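The two attempts above illustrate the subnet walk: 192.168.49.0/24 was taken (and separately held by an unexpired reservation from the pause profile), so the next candidate 192.168.58.0/24 was used. An illustrative Go sketch of that retry loop; the candidate list and helper are assumptions, not minikube's real network_create.go:

package main

import (
	"fmt"
	"os/exec"
)

// tryCreateNetwork shells out to docker; an error covers both "subnet is
// taken" and any other daemon-side failure.
func tryCreateNetwork(name, subnet, gateway string) error {
	return exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
}

func main() {
	name := "force-systemd-flag-20210915032047-22140"
	candidates := []struct{ subnet, gateway string }{
		{"192.168.49.0/24", "192.168.49.1"},
		{"192.168.58.0/24", "192.168.58.1"},
		{"192.168.67.0/24", "192.168.67.1"},
	}
	for _, c := range candidates {
		if err := tryCreateNetwork(name, c.subnet, c.gateway); err != nil {
			fmt.Printf("subnet %s unavailable, will retry: %v\n", c.subnet, err)
			continue
		}
		fmt.Printf("docker network %s %s created\n", name, c.subnet)
		return
	}
	fmt.Println("no free private subnet found")
}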
	I0915 03:20:58.043689   53904 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-flag-20210915032047-22140" container
	I0915 03:20:58.069211   53904 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0915 03:20:58.841063   53904 cli_runner.go:115] Run: docker volume create force-systemd-flag-20210915032047-22140 --label name.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --label created_by.minikube.sigs.k8s.io=true
	I0915 03:20:59.664093   53904 oci.go:102] Successfully created a docker volume force-systemd-flag-20210915032047-22140
	I0915 03:20:59.674222   53904 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20210915032047-22140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --entrypoint /usr/bin/test -v force-systemd-flag-20210915032047-22140:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib
	I0915 03:21:00.776319    7432 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem --> /usr/share/ca-certificates/221402.pem (1708 bytes)
	I0915 03:21:01.887324    7432 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 03:21:02.689597    7432 ssh_runner.go:152] Run: openssl version
	I0915 03:21:02.800584    7432 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 03:21:03.094587    7432 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 03:21:03.256682    7432 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 01:33 /usr/share/ca-certificates/minikubeCA.pem
	I0915 03:21:03.281485    7432 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 03:21:03.531328    7432 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 03:21:03.654550    7432 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22140.pem && ln -fs /usr/share/ca-certificates/22140.pem /etc/ssl/certs/22140.pem"
	I0915 03:21:04.071372    7432 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22140.pem
	I0915 03:21:04.317126    7432 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 01:56 /usr/share/ca-certificates/22140.pem
	I0915 03:21:04.332669    7432 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22140.pem
	I0915 03:21:04.966911    7432 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22140.pem /etc/ssl/certs/51391683.0"
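The openssl x509 -hash / ln -fs pairs above install each CA under its OpenSSL subject hash (b5213941.0, 51391683.0) so OpenSSL's hash-directory lookup can find it. A local-filesystem sketch of the same step, assuming root and skipping the SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// "openssl x509 -hash -noout" prints the subject-name hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // mimic ln -fs: drop any stale link first
	if err := os.Symlink(pem, link); err != nil {
		panic(err) // needs root, like the sudo in the log
	}
	fmt.Println("linked", link, "->", pem)
}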
	I0915 03:21:04.907686   53904 cli_runner.go:168] Completed: docker run --rm --name force-systemd-flag-20210915032047-22140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --entrypoint /usr/bin/test -v force-systemd-flag-20210915032047-22140:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib: (5.232648s)
	I0915 03:21:04.907832   53904 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20210915032047-22140
	I0915 03:21:04.907832   53904 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 03:21:04.908271   53904 kic.go:179] Starting extracting preloaded images to volume ...
	I0915 03:21:04.924906   53904 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 03:21:04.926883   53904 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20210915032047-22140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir
	W0915 03:21:05.905536   53904 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20210915032047-22140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0915 03:21:05.906061   53904 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20210915032047-22140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	����   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__7.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 86
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__5.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[serialized exception detail, binary payload trimmed: thrown by Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(System.String) in Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime; RestrictedDescription: The notification platform is unavailable.]
	See 'docker run --help'.
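The exit status 125 above comes from the docker CLI/daemon itself (here, Docker Desktop failing to show its file-sharing prompt because the Windows notification platform is unavailable on the headless CI host), not from tar inside the container. A small sketch of telling those apart in Go; the busybox command is a stand-in:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Stand-in command; any "docker run" invocation behaves the same way.
	err := exec.Command("docker", "run", "--rm", "busybox", "true").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 125 means the docker CLI/daemon failed before the container's own
		// entrypoint ran; other codes come from inside the container.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}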
	I0915 03:21:06.336563   53904 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.4116618s)
	I0915 03:21:06.338462   53904 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:83 SystemTime:2021-09-15 03:21:05.7077202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 03:21:06.358418   53904 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 03:21:05.285211    7432 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221402.pem && ln -fs /usr/share/ca-certificates/221402.pem /etc/ssl/certs/221402.pem"
	I0915 03:21:06.437755    7432 ssh_runner.go:192] Completed: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221402.pem && ln -fs /usr/share/ca-certificates/221402.pem /etc/ssl/certs/221402.pem": (1.1522776s)
	I0915 03:21:06.452620    7432 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/221402.pem
	I0915 03:21:06.702968    7432 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 01:56 /usr/share/ca-certificates/221402.pem
	I0915 03:21:06.714491    7432 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221402.pem
	I0915 03:21:07.030971    7432 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221402.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 03:21:07.256744    7432 kubeadm.go:390] StartCluster: {Name:pause-20210915030944-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915030944-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 03:21:07.273192    7432 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 03:21:08.050075    7432 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 03:21:08.165056    7432 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 03:21:08.165056    7432 kubeadm.go:600] restartCluster start
	I0915 03:21:08.188349    7432 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 03:21:08.251193    7432 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 03:21:08.257677    7432 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210915030944-22140
	I0915 03:21:09.061352    7432 kubeconfig.go:93] found "pause-20210915030944-22140" server: "https://127.0.0.1:58454"
	I0915 03:21:09.063698    7432 kapi.go:59] client config for pause-20210915030944-22140: &rest.Config{Host:"https://127.0.0.1:58454", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2369780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 03:21:09.109849    7432 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 03:21:09.279912    7432 api_server.go:164] Checking apiserver status ...
	I0915 03:21:09.301107    7432 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:21:10.291697    7432 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/2054/cgroup
	I0915 03:21:10.658480    7432 api_server.go:180] apiserver freezer: "7:freezer:/docker/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/kubepods/burstable/pod763180bf5fbd966512f8b3e939b85eff/62d499a155d8a4797b33acbda488e86b665a52fb872a290e1de1d540a80826f9"
	I0915 03:21:10.674004    7432 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/kubepods/burstable/pod763180bf5fbd966512f8b3e939b85eff/62d499a155d8a4797b33acbda488e86b665a52fb872a290e1de1d540a80826f9/freezer.state
	I0915 03:21:11.068866    7432 api_server.go:202] freezer state: "THAWED"
	I0915 03:21:11.068866    7432 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:58454/healthz ...
	I0915 03:21:11.217385    7432 api_server.go:265] https://127.0.0.1:58454/healthz returned 200:
	ok
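A rough sketch of the probe sequence above (freezer state first, then /healthz); the cgroup path and port are the ones from this run, and TLS verification is skipped only to keep the example short, whereas the real check uses the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	freezer := "/sys/fs/cgroup/freezer/docker/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/kubepods/burstable/pod763180bf5fbd966512f8b3e939b85eff/62d499a155d8a4797b33acbda488e86b665a52fb872a290e1de1d540a80826f9/freezer.state"
	if state, err := os.ReadFile(freezer); err == nil {
		fmt.Println("freezer state:", strings.TrimSpace(string(state))) // expect "THAWED"
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://127.0.0.1:58454/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}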
	I0915 03:21:11.479151    7432 system_pods.go:86] 6 kube-system pods found
	I0915 03:21:11.479151    7432 system_pods.go:89] "coredns-78fcd69978-dm895" [13c9be0b-a7b3-4201-b4f3-b0a0cf66fc3b] Running
	I0915 03:21:11.479151    7432 system_pods.go:89] "etcd-pause-20210915030944-22140" [ef60dfd4-3a16-4fdb-9e94-2d3a90b3f0d2] Running
	I0915 03:21:11.479151    7432 system_pods.go:89] "kube-apiserver-pause-20210915030944-22140" [f800ddc0-e2a2-42af-b67d-8522d21e93a1] Running
	I0915 03:21:11.479151    7432 system_pods.go:89] "kube-controller-manager-pause-20210915030944-22140" [041aa0a4-bb97-49bd-99f0-11b252e48cee] Running
	I0915 03:21:11.479151    7432 system_pods.go:89] "kube-proxy-rqd9p" [f02a5e46-5d3d-458a-ad95-721d55dfbd02] Running
	I0915 03:21:11.479151    7432 system_pods.go:89] "kube-scheduler-pause-20210915030944-22140" [714c20d2-b1a2-4145-8fb9-4ec6feaa6602] Running
	I0915 03:21:11.509058    7432 api_server.go:139] control plane version: v1.22.1
	I0915 03:21:11.509058    7432 kubeadm.go:594] The running cluster does not require reconfiguration: 127.0.0.1
	I0915 03:21:11.509058    7432 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0915 03:21:11.509058    7432 kubeadm.go:604] restartCluster took 3.3440133s
	I0915 03:21:11.509058    7432 kubeadm.go:392] StartCluster complete in 4.2526233s
	I0915 03:21:11.509058    7432 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 03:21:11.510069    7432 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 03:21:11.511041    7432 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 03:21:11.526043    7432 kapi.go:59] client config for pause-20210915030944-22140: &rest.Config{Host:"https://127.0.0.1:58454", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2369780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 03:21:11.609727    7432 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210915030944-22140" rescaled to 1
	I0915 03:21:11.610069    7432 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 03:21:07.614253   53904 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.2558396s)
	I0915 03:21:07.624510   53904 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-20210915032047-22140 --name force-systemd-flag-20210915032047-22140 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --network force-systemd-flag-20210915032047-22140 --ip 192.168.58.2 --volume force-systemd-flag-20210915032047-22140:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56
	I0915 03:21:11.610069    7432 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 03:21:11.610475    7432 addons.go:404] enableAddons start: toEnable=map[], additional=[]
	I0915 03:21:11.631216    7432 addons.go:65] Setting storage-provisioner=true in profile "pause-20210915030944-22140"
	I0915 03:21:11.631216    7432 addons.go:153] Setting addon storage-provisioner=true in "pause-20210915030944-22140"
	W0915 03:21:11.631216    7432 addons.go:165] addon storage-provisioner should already be in state true
	I0915 03:21:11.611107    7432 config.go:177] Loaded profile config "pause-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 03:21:11.631216    7432 host.go:66] Checking if "pause-20210915030944-22140" exists ...
	I0915 03:21:11.629952    7432 out.go:177] * Verifying Kubernetes components...
	I0915 03:21:11.631216    7432 addons.go:65] Setting default-storageclass=true in profile "pause-20210915030944-22140"
	I0915 03:21:11.631682    7432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210915030944-22140"
	I0915 03:21:11.647421    7432 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 03:21:11.659776    7432 cli_runner.go:115] Run: docker container inspect pause-20210915030944-22140 --format={{.State.Status}}
	I0915 03:21:11.660791    7432 cli_runner.go:115] Run: docker container inspect pause-20210915030944-22140 --format={{.State.Status}}
	I0915 03:21:12.503368    7432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 03:21:12.504346    7432 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 03:21:12.504346    7432 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 03:21:12.512346    7432 kapi.go:59] client config for pause-20210915030944-22140: &rest.Config{Host:"https://127.0.0.1:58454", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915030944-22140\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2369780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 03:21:12.514346    7432 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915030944-22140
	I0915 03:21:12.667369    7432 addons.go:153] Setting addon default-storageclass=true in "pause-20210915030944-22140"
	W0915 03:21:12.667369    7432 addons.go:165] addon default-storageclass should already be in state true
	I0915 03:21:12.667369    7432 host.go:66] Checking if "pause-20210915030944-22140" exists ...
	I0915 03:21:12.704497    7432 cli_runner.go:115] Run: docker container inspect pause-20210915030944-22140 --format={{.State.Status}}
	I0915 03:21:13.388146    7432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58455 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915030944-22140\id_rsa Username:docker}
	I0915 03:21:13.649506    7432 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 03:21:13.649955    7432 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 03:21:13.670489    7432 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915030944-22140
	I0915 03:21:14.510588    7432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58455 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915030944-22140\id_rsa Username:docker}
	I0915 03:21:13.924525   53904 cli_runner.go:168] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-20210915032047-22140 --name force-systemd-flag-20210915032047-22140 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-20210915032047-22140 --network force-systemd-flag-20210915032047-22140 --ip 192.168.58.2 --volume force-systemd-flag-20210915032047-22140:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56: (6.2995747s)
	I0915 03:21:13.947238   53904 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210915032047-22140 --format={{.State.Running}}
	I0915 03:21:14.841727   53904 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210915032047-22140 --format={{.State.Status}}
	I0915 03:21:15.619675   53904 cli_runner.go:115] Run: docker exec force-systemd-flag-20210915032047-22140 stat /var/lib/dpkg/alternatives/iptables
	I0915 03:21:17.172685   53904 cli_runner.go:168] Completed: docker exec force-systemd-flag-20210915032047-22140 stat /var/lib/dpkg/alternatives/iptables: (1.552019s)
	I0915 03:21:17.173110   53904 oci.go:281] the created container "force-systemd-flag-20210915032047-22140" has a running status.
	I0915 03:21:17.173110   53904 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa...
	I0915 03:21:17.349941   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0915 03:21:17.370544   53904 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 03:21:19.232009   53904 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210915032047-22140 --format={{.State.Status}}
	I0915 03:21:20.165942   53904 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 03:21:20.166933   53904 kic_runner.go:115] Args: [docker exec --privileged force-systemd-flag-20210915032047-22140 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 03:21:21.361108   53904 kic_runner.go:124] Done: [docker exec --privileged force-systemd-flag-20210915032047-22140 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.1941795s)
	I0915 03:21:21.366128   53904 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa...
	I0915 03:21:21.694559    7432 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 03:21:21.728559    7432 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 03:21:23.020112    7432 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (11.3727294s)
	I0915 03:21:23.020112    7432 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (11.3890773s)
	I0915 03:21:23.028631    7432 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0915 03:21:23.048675    7432 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210915030944-22140
	I0915 03:21:23.935302    7432 node_ready.go:35] waiting up to 6m0s for node "pause-20210915030944-22140" to be "Ready" ...
	I0915 03:21:24.280327    7432 node_ready.go:49] node "pause-20210915030944-22140" has status "Ready":"True"
	I0915 03:21:24.280625    7432 node_ready.go:38] duration metric: took 345.0264ms waiting for node "pause-20210915030944-22140" to be "Ready" ...
	I0915 03:21:24.280625    7432 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 03:21:24.462524    7432 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-dm895" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:24.699053    7432 pod_ready.go:92] pod "coredns-78fcd69978-dm895" in "kube-system" namespace has status "Ready":"True"
	I0915 03:21:24.699215    7432 pod_ready.go:81] duration metric: took 236.5297ms waiting for pod "coredns-78fcd69978-dm895" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:24.699215    7432 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:24.809921    7432 pod_ready.go:92] pod "etcd-pause-20210915030944-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 03:21:24.809921    7432 pod_ready.go:81] duration metric: took 110.7063ms waiting for pod "etcd-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:24.809921    7432 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:25.052201    7432 pod_ready.go:92] pod "kube-apiserver-pause-20210915030944-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 03:21:25.052517    7432 pod_ready.go:81] duration metric: took 242.4716ms waiting for pod "kube-apiserver-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:25.052517    7432 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:22.289115   53904 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210915032047-22140 --format={{.State.Status}}
	I0915 03:21:23.060977   53904 machine.go:88] provisioning docker machine ...
	I0915 03:21:23.060977   53904 ubuntu.go:169] provisioning hostname "force-systemd-flag-20210915032047-22140"
	I0915 03:21:23.070974   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:23.998202   53904 main.go:130] libmachine: Using SSH client type: native
	I0915 03:21:23.999229   53904 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58571 <nil> <nil>}
	I0915 03:21:23.999328   53904 main.go:130] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-20210915032047-22140 && echo "force-systemd-flag-20210915032047-22140" | sudo tee /etc/hostname
	I0915 03:21:25.248192   53904 main.go:130] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-20210915032047-22140
	
	I0915 03:21:25.259493   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:26.044432   53904 main.go:130] libmachine: Using SSH client type: native
	I0915 03:21:26.045216   53904 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58571 <nil> <nil>}
	I0915 03:21:26.045216   53904 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-20210915032047-22140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-20210915032047-22140/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-20210915032047-22140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 03:21:26.829767   53904 main.go:130] libmachine: SSH cmd err, output: <nil>: 
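The SSH command above is the usual ensure-hostname dance for /etc/hosts. The same logic as a self-contained Go sketch (the function name is invented for illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic: leave the file alone if
// the hostname is already present, rewrite an existing 127.0.1.1 line,
// otherwise append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return nil // already present, nothing to do
		}
	}
	entry := "127.0.1.1 " + hostname
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // rewrite the stale entry in place
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, entry)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "force-systemd-flag-20210915032047-22140"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}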
	I0915 03:21:26.830307   53904 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 03:21:26.830607   53904 ubuntu.go:177] setting up certificates
	I0915 03:21:26.830775   53904 provision.go:83] configureAuth start
	I0915 03:21:26.844061   53904 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20210915032047-22140
	I0915 03:21:25.393853    7432 pod_ready.go:92] pod "kube-controller-manager-pause-20210915030944-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 03:21:25.393853    7432 pod_ready.go:81] duration metric: took 341.3377ms waiting for pod "kube-controller-manager-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:25.393853    7432 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rqd9p" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:26.160640    7432 pod_ready.go:92] pod "kube-proxy-rqd9p" in "kube-system" namespace has status "Ready":"True"
	I0915 03:21:26.160766    7432 pod_ready.go:81] duration metric: took 766.9149ms waiting for pod "kube-proxy-rqd9p" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:26.160766    7432 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:26.345463    7432 pod_ready.go:92] pod "kube-scheduler-pause-20210915030944-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 03:21:26.345463    7432 pod_ready.go:81] duration metric: took 184.6981ms waiting for pod "kube-scheduler-pause-20210915030944-22140" in "kube-system" namespace to be "Ready" ...
	I0915 03:21:26.345463    7432 pod_ready.go:38] duration metric: took 2.0648456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 03:21:26.345734    7432 api_server.go:50] waiting for apiserver process to appear ...
	I0915 03:21:26.370975    7432 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 03:21:27.767687   53904 provision.go:138] copyHostCerts
	I0915 03:21:27.768998   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins\minikube-integration\.minikube/ca.pem
	I0915 03:21:27.770874   53904 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 03:21:27.771624   53904 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 03:21:27.775380   53904 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 03:21:27.784980   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins\minikube-integration\.minikube/cert.pem
	I0915 03:21:27.785569   53904 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 03:21:27.786139   53904 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 03:21:27.787421   53904 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 03:21:27.787907   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins\minikube-integration\.minikube/key.pem
	I0915 03:21:27.788907   53904 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 03:21:27.788907   53904 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 03:21:27.788907   53904 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0915 03:21:27.790906   53904 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-20210915032047-22140 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-20210915032047-22140]
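provision.go:112 above generates a server certificate whose SANs cover the container IP, loopback, and the machine hostnames. A minimal crypto/x509 sketch of a cert with that SAN set; unlike the real provisioner, which signs with the CA key pair named in the log, this one self-signs to stay self-contained:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-20210915032047-22140"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log: IPs plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "force-systemd-flag-20210915032047-22140"},
	}
	// Self-signed: template doubles as parent; the provisioner would pass
	// the CA cert and key here instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}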
	I0915 03:21:28.254639   53904 provision.go:172] copyRemoteCerts
	I0915 03:21:28.268451   53904 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 03:21:28.279145   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:29.034158   53904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa Username:docker}
	I0915 03:21:29.455307   53904 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.1868594s)
	I0915 03:21:29.456559   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0915 03:21:29.462855   53904 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 03:21:29.731960   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0915 03:21:29.732778   53904 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1289 bytes)
	I0915 03:21:30.001758   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0915 03:21:30.002458   53904 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 03:21:30.232211   53904 provision.go:86] duration metric: configureAuth took 3.4014479s
	I0915 03:21:30.232497   53904 ubuntu.go:193] setting minikube options for container-runtime
	I0915 03:21:30.233192   53904 config.go:177] Loaded profile config "force-systemd-flag-20210915032047-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 03:21:30.246521   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:31.009899   53904 main.go:130] libmachine: Using SSH client type: native
	I0915 03:21:31.011167   53904 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58571 <nil> <nil>}
	I0915 03:21:31.011167   53904 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 03:21:31.497233   53904 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 03:21:31.497233   53904 ubuntu.go:71] root file system type: overlay
	I0915 03:21:31.498220   53904 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 03:21:31.507224   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:32.255119   53904 main.go:130] libmachine: Using SSH client type: native
	I0915 03:21:32.255119   53904 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58571 <nil> <nil>}
	I0915 03:21:32.255119   53904 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 03:21:33.192641   53904 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 03:21:33.206002   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:34.129980   53904 main.go:130] libmachine: Using SSH client type: native
	I0915 03:21:34.130411   53904 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 58571 <nil> <nil>}
	I0915 03:21:34.130411   53904 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
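
Note: diff exits non-zero when the two files differ, so the || block (move the new unit into place, daemon-reload, enable, restart) runs only when the rendered unit actually changed. The guard in isolation, as a sketch:

    if ! sudo diff -u docker.service docker.service.new; then   # non-zero exit = files differ
      sudo mv docker.service.new docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    fi
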
	I0915 03:21:39.131593    7432 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (17.436906s)
	I0915 03:21:39.131593    7432 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (17.4030931s)
	I0915 03:21:39.132066    7432 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (12.7611352s)
	I0915 03:21:39.132066    7432 api_server.go:70] duration metric: took 27.5218569s to wait for apiserver process to appear ...
	I0915 03:21:39.132066    7432 api_server.go:86] waiting for apiserver healthz status ...
	I0915 03:21:39.132066    7432 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:58454/healthz ...
	I0915 03:21:39.135231    7432 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0915 03:21:39.135231    7432 addons.go:406] enableAddons completed in 27.5250212s
	I0915 03:21:39.245261    7432 api_server.go:265] https://127.0.0.1:58454/healthz returned 200:
	ok
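
Note: the healthz probe is a plain HTTPS GET against the forwarded apiserver port (58454 for this profile). The same check can be reproduced by hand, skipping TLS verification because the cluster CA is not in the host trust store:

    curl -k https://127.0.0.1:58454/healthz
    # ok
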
	I0915 03:21:39.449341    7432 api_server.go:139] control plane version: v1.22.1
	I0915 03:21:39.449341    7432 api_server.go:129] duration metric: took 317.2754ms to wait for apiserver health ...
	I0915 03:21:39.449341    7432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 03:21:39.651604    7432 system_pods.go:59] 7 kube-system pods found
	I0915 03:21:39.651932    7432 system_pods.go:61] "coredns-78fcd69978-dm895" [13c9be0b-a7b3-4201-b4f3-b0a0cf66fc3b] Running
	I0915 03:21:39.651932    7432 system_pods.go:61] "etcd-pause-20210915030944-22140" [ef60dfd4-3a16-4fdb-9e94-2d3a90b3f0d2] Running
	I0915 03:21:39.651932    7432 system_pods.go:61] "kube-apiserver-pause-20210915030944-22140" [f800ddc0-e2a2-42af-b67d-8522d21e93a1] Running
	I0915 03:21:39.651932    7432 system_pods.go:61] "kube-controller-manager-pause-20210915030944-22140" [041aa0a4-bb97-49bd-99f0-11b252e48cee] Running
	I0915 03:21:39.651932    7432 system_pods.go:61] "kube-proxy-rqd9p" [f02a5e46-5d3d-458a-ad95-721d55dfbd02] Running
	I0915 03:21:39.651932    7432 system_pods.go:61] "kube-scheduler-pause-20210915030944-22140" [714c20d2-b1a2-4145-8fb9-4ec6feaa6602] Running
	I0915 03:21:39.651932    7432 system_pods.go:61] "storage-provisioner" [4520129c-3d86-4b9b-811d-7cba9545d903] Pending
	I0915 03:21:39.651932    7432 system_pods.go:74] duration metric: took 202.5916ms to wait for pod list to return data ...
	I0915 03:21:39.652058    7432 default_sa.go:34] waiting for default service account to be created ...
	I0915 03:21:39.750394    7432 default_sa.go:45] found service account: "default"
	I0915 03:21:39.750621    7432 default_sa.go:55] duration metric: took 98.5639ms for default service account to be created ...
	I0915 03:21:39.750621    7432 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 03:21:39.887863    7432 system_pods.go:86] 7 kube-system pods found
	I0915 03:21:39.887863    7432 system_pods.go:89] "coredns-78fcd69978-dm895" [13c9be0b-a7b3-4201-b4f3-b0a0cf66fc3b] Running
	I0915 03:21:39.887863    7432 system_pods.go:89] "etcd-pause-20210915030944-22140" [ef60dfd4-3a16-4fdb-9e94-2d3a90b3f0d2] Running
	I0915 03:21:39.887863    7432 system_pods.go:89] "kube-apiserver-pause-20210915030944-22140" [f800ddc0-e2a2-42af-b67d-8522d21e93a1] Running
	I0915 03:21:39.887863    7432 system_pods.go:89] "kube-controller-manager-pause-20210915030944-22140" [041aa0a4-bb97-49bd-99f0-11b252e48cee] Running
	I0915 03:21:39.887863    7432 system_pods.go:89] "kube-proxy-rqd9p" [f02a5e46-5d3d-458a-ad95-721d55dfbd02] Running
	I0915 03:21:39.887863    7432 system_pods.go:89] "kube-scheduler-pause-20210915030944-22140" [714c20d2-b1a2-4145-8fb9-4ec6feaa6602] Running
	I0915 03:21:39.887863    7432 system_pods.go:89] "storage-provisioner" [4520129c-3d86-4b9b-811d-7cba9545d903] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 03:21:39.887863    7432 system_pods.go:126] duration metric: took 137.2425ms to wait for k8s-apps to be running ...
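
Note: the k8s-apps wait tolerates storage-provisioner being Pending/ContainersNotReady; only the control-plane pods listed above need to be Running. The equivalent interactive view, assuming the profile's kubeconfig context:

    kubectl --context pause-20210915030944-22140 -n kube-system get pods
    kubectl --context pause-20210915030944-22140 -n kube-system describe pod storage-provisioner
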
	I0915 03:21:39.887863    7432 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 03:21:39.914327    7432 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 03:21:40.639894    7432 system_svc.go:56] duration metric: took 752.0331ms WaitForService to wait for kubelet.
	I0915 03:21:40.639894    7432 kubeadm.go:547] duration metric: took 29.0296893s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 03:21:40.640896    7432 node_conditions.go:102] verifying NodePressure condition ...
	I0915 03:21:42.452689    7432 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 03:21:42.452689    7432 node_conditions.go:123] node cpu capacity is 4
	I0915 03:21:42.452689    7432 node_conditions.go:105] duration metric: took 1.8117998s to run NodePressure ...
	I0915 03:21:42.452689    7432 start.go:231] waiting for startup goroutines ...
	I0915 03:21:42.796098    7432 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 03:21:42.798146    7432 out.go:177] 
	W0915 03:21:42.799085    7432 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 03:21:42.803166    7432 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 03:21:42.806501    7432 out.go:177] * Done! kubectl is now configured to use "pause-20210915030944-22140" cluster and "default" namespace by default
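
Note: the warning above reflects kubectl's supported version skew of one minor version either side of the server; 1.20 against 1.22 is a skew of two. minikube ships a matching client, per the hint in the output:

    minikube -p pause-20210915030944-22140 kubectl -- get pods -A
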
	I0915 03:21:43.759655   53904 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-09-15 03:21:33.146605000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0915 03:21:43.759655   53904 machine.go:91] provisioned docker machine in 20.6987476s
	I0915 03:21:43.759945   53904 client.go:171] LocalClient.Create took 50.1534456s
	I0915 03:21:43.759945   53904 start.go:168] duration metric: libmachine.API.Create for "force-systemd-flag-20210915032047-22140" took 50.1537364s
	I0915 03:21:43.759945   53904 start.go:267] post-start starting for "force-systemd-flag-20210915032047-22140" (driver="docker")
	I0915 03:21:43.760178   53904 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 03:21:43.773419   53904 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 03:21:43.784417   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:44.587363   53904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa Username:docker}
	I0915 03:21:45.204599   53904 ssh_runner.go:192] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4311848s)
	I0915 03:21:45.231587   53904 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 03:21:45.330602   53904 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 03:21:45.330602   53904 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 03:21:45.330602   53904 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 03:21:45.330602   53904 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 03:21:45.330602   53904 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 03:21:45.331598   53904 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 03:21:45.332595   53904 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem -> 221402.pem in /etc/ssl/certs
	I0915 03:21:45.332595   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem -> /etc/ssl/certs/221402.pem
	I0915 03:21:45.347907   53904 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I0915 03:21:45.492942   53904 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem --> /etc/ssl/certs/221402.pem (1708 bytes)
	I0915 03:21:45.730178   53904 start.go:270] post-start completed in 1.970007s
	I0915 03:21:45.744181   53904 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20210915032047-22140
	I0915 03:21:46.578830   53904 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\force-systemd-flag-20210915032047-22140\config.json ...
	I0915 03:21:46.597836   53904 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 03:21:46.611856   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:47.435840   53904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa Username:docker}
	I0915 03:21:47.999629   53904 ssh_runner.go:192] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4017983s)
	I0915 03:21:47.999808   53904 start.go:129] duration metric: createHost completed in 54.397526s
	I0915 03:21:47.999808   53904 start.go:80] releasing machines lock for "force-systemd-flag-20210915032047-22140", held for 54.3979322s
	I0915 03:21:48.013535   53904 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20210915032047-22140
	I0915 03:21:48.764373   53904 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 03:21:48.773363   53904 ssh_runner.go:152] Run: systemctl --version
	I0915 03:21:48.773363   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:48.783386   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:21:49.534275   53904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa Username:docker}
	I0915 03:21:49.592769   53904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\force-systemd-flag-20210915032047-22140\id_rsa Username:docker}
	I0915 03:21:50.404416   53904 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6400491s)
	I0915 03:21:50.404416   53904 ssh_runner.go:192] Completed: systemctl --version: (1.6310589s)
	I0915 03:21:50.423296   53904 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 03:21:50.542298   53904 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 03:21:50.805755   53904 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 03:21:50.818363   53904 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 03:21:50.898345   53904 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 03:21:51.121974   53904 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 03:21:52.360787   53904 ssh_runner.go:192] Completed: sudo systemctl unmask docker.service: (1.2388166s)
	I0915 03:21:52.372933   53904 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 03:21:53.820303   53904 ssh_runner.go:192] Completed: sudo systemctl enable docker.socket: (1.4473748s)
	I0915 03:21:53.820721   53904 docker.go:458] Forcing docker to use systemd as cgroup manager...
	I0915 03:21:53.820897   53904 ssh_runner.go:319] scp memory --> /etc/docker/daemon.json (143 bytes)
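
Note: the 143-byte daemon.json pushed here carries the cgroup-driver override for the force-systemd test. The exact payload is not echoed in the log, but a representative file (illustrative only) would be:

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

The kubelet and the container runtime must agree on the cgroup driver, which is why this test sets it explicitly before restarting docker.
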
	I0915 03:21:54.076776   53904 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 03:21:55.317402   53904 ssh_runner.go:192] Completed: sudo systemctl daemon-reload: (1.2406304s)
	I0915 03:21:55.329430   53904 ssh_runner.go:152] Run: sudo systemctl restart docker
	I0915 03:22:00.872710   53904 ssh_runner.go:192] Completed: sudo systemctl restart docker: (5.5432991s)
	I0915 03:22:00.885914   53904 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 03:22:01.464246   53904 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 03:22:01.928476   53904 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
	I0915 03:22:01.946623   53904 cli_runner.go:115] Run: docker exec -t force-systemd-flag-20210915032047-22140 dig +short host.docker.internal
	I0915 03:22:03.508038   53904 cli_runner.go:168] Completed: docker exec -t force-systemd-flag-20210915032047-22140 dig +short host.docker.internal: (1.5614203s)
	I0915 03:22:03.508349   53904 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 03:22:03.532071   53904 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 03:22:03.582195   53904 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 03:22:03.753046   53904 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-flag-20210915032047-22140
	I0915 03:22:04.551530   53904 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 03:22:04.563656   53904 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 03:22:04.969190   53904 docker.go:558] Got preloaded images: 
	I0915 03:22:04.969190   53904 docker.go:564] k8s.gcr.io/kube-apiserver:v1.22.1 wasn't preloaded
	I0915 03:22:04.980178   53904 ssh_runner.go:152] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 03:22:05.151269   53904 ssh_runner.go:152] Run: which lz4
	I0915 03:22:05.256318   53904 vm_assets.go:109] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0915 03:22:05.291591   53904 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0915 03:22:05.380991   53904 ssh_runner.go:309] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0915 03:22:05.381247   53904 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (540060231 bytes)
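
Note: the stat probe above showed /preloaded.tar.lz4 absent, so minikube copies the ~515 MB preload tarball (540060231 bytes) into the node and then unpacks it over /var so the Kubernetes images need not be pulled. The unpack step, roughly (a sketch, not the exact command from this run):

    lz4 -dc /preloaded.tar.lz4 | sudo tar -xf - -C /var
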
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 03:10:20 UTC, end at Wed 2021-09-15 03:22:27 UTC. --
	Sep 15 03:14:33 pause-20210915030944-22140 systemd[1]: Stopped Docker Application Container Engine.
	Sep 15 03:14:33 pause-20210915030944-22140 systemd[1]: Starting Docker Application Container Engine...
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.612116600Z" level=info msg="Starting up"
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.633818600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.633857900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.633971900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.634002900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.650577900Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.650668600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.650718300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.650744500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.749668500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 15 03:14:33 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:33.802855100Z" level=info msg="Loading containers: start."
	Sep 15 03:14:34 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:34.760899100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 15 03:14:35 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:35.124521300Z" level=info msg="Loading containers: done."
	Sep 15 03:14:35 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:35.224263300Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
	Sep 15 03:14:35 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:35.224600700Z" level=info msg="Daemon has completed initialization"
	Sep 15 03:14:35 pause-20210915030944-22140 systemd[1]: Started Docker Application Container Engine.
	Sep 15 03:14:35 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:35.475910900Z" level=info msg="API listen on [::]:2376"
	Sep 15 03:14:35 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:14:35.498137700Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 15 03:16:23 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:16:23.812827800Z" level=info msg="ignoring event" container=17b1e3be7bcd4753fa4df551e5447f034a30eac6b21dd285984e98300e9d11f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 03:18:31 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:18:31.075031700Z" level=info msg="ignoring event" container=775b73a720960fb514e9850dd248e7ec7311c1970937a1df0f3f690ce637237b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 03:18:36 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:18:36.090126100Z" level=info msg="ignoring event" container=d9ced09b22e872aaa6970cc7f83d50450abea3f3ac2a8306a9355663b1f0f2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 03:21:55 pause-20210915030944-22140 dockerd[780]: time="2021-09-15T03:21:55.571919600Z" level=error msg="Handler for GET /v1.41/containers/3a07969773872dad997eb0d0b61b6473de45713f5b262ff3844b4503c788eccc/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 03:21:55 pause-20210915030944-22140 dockerd[780]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                       PORTS     NAMES
	3a0796977387   k8s.gcr.io/pause:3.5   "/pause"                 48 seconds ago   Up 34 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_4520129c-3d86-4b9b-811d-7cba9545d903_0
	697ce18295bf   8d147537fb7d           "/coredns -conf /etc…"   4 minutes ago    Up 4 minutes (Paused)                  k8s_coredns_coredns-78fcd69978-dm895_kube-system_13c9be0b-a7b3-4201-b4f3-b0a0cf66fc3b_0
	aa377b1651ca   36c4ebbc9d97           "/usr/local/bin/kube…"   4 minutes ago    Up 4 minutes (Paused)                  k8s_kube-proxy_kube-proxy-rqd9p_kube-system_f02a5e46-5d3d-458a-ad95-721d55dfbd02_0
	bc0b83c46e60   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago    Up 4 minutes (Paused)                  k8s_POD_coredns-78fcd69978-dm895_kube-system_13c9be0b-a7b3-4201-b4f3-b0a0cf66fc3b_0
	3b072e1c1720   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago    Up 4 minutes (Paused)                  k8s_POD_kube-proxy-rqd9p_kube-system_f02a5e46-5d3d-458a-ad95-721d55dfbd02_0
	4d57b600089e   6e002eb89a88           "kube-controller-man…"   5 minutes ago    Up 5 minutes (Paused)                  k8s_kube-controller-manager_kube-controller-manager-pause-20210915030944-22140_kube-system_56b476b04ef5f15f84806e4c44e291da_1
	17b1e3be7bcd   6e002eb89a88           "kube-controller-man…"   7 minutes ago    Exited (255) 6 minutes ago             k8s_kube-controller-manager_kube-controller-manager-pause-20210915030944-22140_kube-system_56b476b04ef5f15f84806e4c44e291da_0
	52ce83004dc4   004811815584           "etcd --advertise-cl…"   7 minutes ago    Up 6 minutes (Paused)                  k8s_etcd_etcd-pause-20210915030944-22140_kube-system_372ed4a8abfe18f6535eff70db1e9c5d_0
	43dab7f1c551   aca5ededae9c           "kube-scheduler --au…"   7 minutes ago    Up 6 minutes (Paused)                  k8s_kube-scheduler_kube-scheduler-pause-20210915030944-22140_kube-system_f7d9ba894be6e0b3fd26164cb118c0fa_0
	62d499a155d8   f30469a2491a           "kube-apiserver --ad…"   7 minutes ago    Up 7 minutes (Paused)                  k8s_kube-apiserver_kube-apiserver-pause-20210915030944-22140_kube-system_763180bf5fbd966512f8b3e939b85eff_0
	58f148da551c   k8s.gcr.io/pause:3.5   "/pause"                 7 minutes ago    Up 7 minutes (Paused)                  k8s_POD_etcd-pause-20210915030944-22140_kube-system_372ed4a8abfe18f6535eff70db1e9c5d_0
	f8184333bc6e   k8s.gcr.io/pause:3.5   "/pause"                 7 minutes ago    Up 7 minutes (Paused)                  k8s_POD_kube-scheduler-pause-20210915030944-22140_kube-system_f7d9ba894be6e0b3fd26164cb118c0fa_0
	ceeffa71b613   k8s.gcr.io/pause:3.5   "/pause"                 7 minutes ago    Up 7 minutes (Paused)                  k8s_POD_kube-controller-manager-pause-20210915030944-22140_kube-system_56b476b04ef5f15f84806e4c44e291da_0
	a7c77cee27fc   k8s.gcr.io/pause:3.5   "/pause"                 7 minutes ago    Up 7 minutes (Paused)                  k8s_POD_kube-apiserver-pause-20210915030944-22140_kube-system_763180bf5fbd966512f8b3e939b85eff_0
	time="2021-09-15T03:22:29Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> coredns [697ce18295bf] <==
	* [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000003]  ? hrtimer_init+0xde/0xde
	[  +0.000001]  hrtimer_wakeup+0x1e/0x21
	[  +0.000005]  __hrtimer_run_queues+0x117/0x1c4
	[  +0.000002]  ? ktime_get_update_offsets_now+0x36/0x95
	[  +0.000002]  hrtimer_interrupt+0x92/0x165
	[  +0.000003]  hv_stimer0_isr+0x20/0x2d
	[  +0.000007]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000009]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000001]  </IRQ>
	[  +0.000001] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 fd b5 6f 69 ff ff ff 7f c3 e8 77 ce 72 ff f4 c3 e8 70 ce 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 f1 f5 81 ff 65 8b 35 b3 4b 6f 69 31 ff e8
	[  +0.000001] RSP: 0018:ffff98b6000a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000001] RAX: ffffffff9691a410 RBX: 0000000000000001 RCX: ffffffff97253150
	[  +0.000001] RDX: 00000000001bfb3e RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 011cf099150136ab R09: 0000000000000002
	[  +0.000000] R10: ffff8b9f6df73938 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8b9fae19e1c0 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? ldsem_down_write+0x1da/0x1da
	[  +0.000009]  ? native_safe_halt+0x5/0x8
	[  +0.000001]  default_idle+0x1b/0x2c
	[  +0.000001]  do_idle+0xe5/0x216
	[  +0.000002]  cpu_startup_entry+0x6f/0x71
	[  +0.000003]  start_secondary+0x18e/0x1a9
	[  +0.000006]  secondary_startup_64+0xa4/0xb0
	[  +0.000005] ---[ end trace f027fbf82db24e21 ]---
	
	* 
	* ==> etcd [52ce83004dc4] <==
	* {"level":"info","ts":"2021-09-15T03:18:30.417Z","caller":"traceutil/trace.go:171","msg":"trace[1476229935] linearizableReadLoop","detail":"{readStateIndex:539; appliedIndex:539; }","duration":"127.4385ms","start":"2021-09-15T03:18:30.290Z","end":"2021-09-15T03:18:30.417Z","steps":["trace[1476229935] 'read index received'  (duration: 127.4291ms)","trace[1476229935] 'applied index is now lower than readState.Index'  (duration: 7.6µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:18:30.566Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"276.6347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" ","response":"range_response_count:12 size:13525"}
	{"level":"info","ts":"2021-09-15T03:18:30.566Z","caller":"traceutil/trace.go:171","msg":"trace[293910743] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:12; response_revision:506; }","duration":"277.2159ms","start":"2021-09-15T03:18:30.289Z","end":"2021-09-15T03:18:30.566Z","steps":["trace[293910743] 'agreement among raft nodes before linearized reading'  (duration: 128.5971ms)","trace[293910743] 'range keys from in-memory index tree'  (duration: 148.249ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:18:30.567Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"249.1225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2021-09-15T03:18:30.567Z","caller":"traceutil/trace.go:171","msg":"trace[986769912] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:506; }","duration":"249.1613ms","start":"2021-09-15T03:18:30.317Z","end":"2021-09-15T03:18:30.567Z","steps":["trace[986769912] 'agreement among raft nodes before linearized reading'  (duration: 100.0906ms)","trace[986769912] 'range keys from in-memory index tree'  (duration: 148.9977ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T03:18:39.490Z","caller":"traceutil/trace.go:171","msg":"trace[485723073] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"169.1686ms","start":"2021-09-15T03:18:39.321Z","end":"2021-09-15T03:18:39.490Z","steps":[],"step_count":0}
	{"level":"info","ts":"2021-09-15T03:18:44.703Z","caller":"traceutil/trace.go:171","msg":"trace[2028064732] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-rqd9p; range_end:; response_count:1; response_revision:518; }","duration":"125.8153ms","start":"2021-09-15T03:18:44.577Z","end":"2021-09-15T03:18:44.703Z","steps":["trace[2028064732] 'get authentication metadata'  (duration: 15.6583ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T03:18:55.825Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"201.2774ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007661508971150 > lease_revoke:<id:70cc7be773e54a44>","response":"size:29"}
	{"level":"warn","ts":"2021-09-15T03:19:00.085Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2021-09-15T03:19:00.114Z","caller":"traceutil/trace.go:171","msg":"trace[1314852551] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:520; }","duration":"161.7181ms","start":"2021-09-15T03:18:59.951Z","end":"2021-09-15T03:19:00.113Z","steps":["trace[1314852551] 'count revisions from in-memory index tree'  (duration: 118.527ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:19:23.239Z","caller":"traceutil/trace.go:171","msg":"trace[1241979890] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:569; }","duration":"113.492ms","start":"2021-09-15T03:19:23.126Z","end":"2021-09-15T03:19:23.239Z","steps":["trace[1241979890] 'read index received'  (duration: 113.4821ms)","trace[1241979890] 'applied index is now lower than readState.Index'  (duration: 8.1µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:19:23.240Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.9048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:120"}
	{"level":"info","ts":"2021-09-15T03:19:23.240Z","caller":"traceutil/trace.go:171","msg":"trace[767842617] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:526; }","duration":"114.0664ms","start":"2021-09-15T03:19:23.126Z","end":"2021-09-15T03:19:23.240Z","steps":["trace[767842617] 'agreement among raft nodes before linearized reading'  (duration: 113.8341ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T03:19:50.467Z","caller":"traceutil/trace.go:171","msg":"trace[99173082] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"110.6779ms","start":"2021-09-15T03:19:50.355Z","end":"2021-09-15T03:19:50.466Z","steps":["trace[99173082] 'process raft request'  (duration: 53.204ms)","trace[99173082] 'compare'  (duration: 53.6227ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:21:06.361Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.8508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T03:21:06.379Z","caller":"traceutil/trace.go:171","msg":"trace[1311259410] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:547; }","duration":"118.5816ms","start":"2021-09-15T03:21:06.260Z","end":"2021-09-15T03:21:06.378Z","steps":["trace[1311259410] 'agreement among raft nodes before linearized reading'  (duration: 88.9088ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T03:21:33.263Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.0201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T03:21:33.264Z","caller":"traceutil/trace.go:171","msg":"trace[121166492] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:552; }","duration":"129.4375ms","start":"2021-09-15T03:21:33.134Z","end":"2021-09-15T03:21:33.264Z","steps":["trace[121166492] 'range keys from in-memory index tree'  (duration: 113.8586ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T03:21:41.184Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.5895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T03:21:41.184Z","caller":"traceutil/trace.go:171","msg":"trace[83270311] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:565; }","duration":"100.7654ms","start":"2021-09-15T03:21:41.083Z","end":"2021-09-15T03:21:41.184Z","steps":["trace[83270311] 'agreement among raft nodes before linearized reading'  (duration: 33.2004ms)","trace[83270311] 'count revisions from in-memory index tree'  (duration: 67.3513ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T03:21:47.395Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2021-09-15T03:21:47.396Z","caller":"traceutil/trace.go:171","msg":"trace[890763893] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:566; }","duration":"130.4192ms","start":"2021-09-15T03:21:47.265Z","end":"2021-09-15T03:21:47.395Z","steps":["trace[890763893] 'agreement among raft nodes before linearized reading'  (duration: 103.8945ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T03:21:47.627Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T03:21:47.265Z","time spent":"339.2841ms","remote":"127.0.0.1:54648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":31,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true "}
	{"level":"warn","ts":"2021-09-15T03:21:50.960Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"207.7448ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007661508971846 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:565 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128007661508971843 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T03:21:50.961Z","caller":"traceutil/trace.go:171","msg":"trace[606781263] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"208.5921ms","start":"2021-09-15T03:21:50.752Z","end":"2021-09-15T03:21:50.961Z","steps":["trace[606781263] 'compare'  (duration: 207.5444ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  03:22:42 up  1:59,  0 users,  load average: 31.93, 29.72, 17.17
	Linux pause-20210915030944-22140 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [62d499a155d8] <==
	* Trace[1978514976]: ---"Listing from storage done" 1081ms (03:17:28.243)
	Trace[1978514976]: [1.082269s] [1.082269s] END
	I0915 03:17:28.297868       1 trace.go:205] Trace[498676314]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-pause-20210915030944-22140,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:bee32738-5c5b-4c05-9a7c-d25689fd5266,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:17:27.319) (total time: 977ms):
	Trace[498676314]: ---"About to write a response" 963ms (03:17:28.283)
	Trace[498676314]: [977.2199ms] [977.2199ms] END
	I0915 03:17:28.376743       1 trace.go:205] Trace[449108547]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:dc35547f-ed69-4edf-bb9c-29804310f6c6,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:17:27.213) (total time: 1163ms):
	Trace[449108547]: ---"Object stored in database" 1161ms (03:17:28.376)
	Trace[449108547]: [1.163004s] [1.163004s] END
	I0915 03:17:29.027915       1 trace.go:205] Trace[1975214077]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:c15981b7-5c98-4d18-89c6-1e1e351a9fb6,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:17:27.032) (total time: 1988ms):
	Trace[1975214077]: [1.988706s] [1.988706s] END
	I0915 03:17:29.035419       1 trace.go:205] Trace[1832931609]: "GuaranteedUpdate etcd3" type:*core.Pod (15-Sep-2021 03:17:28.448) (total time: 586ms):
	Trace[1832931609]: ---"Transaction committed" 566ms (03:17:29.021)
	Trace[1832931609]: [586.4753ms] [586.4753ms] END
	I0915 03:17:29.036365       1 trace.go:205] Trace[765938691]: "Patch" url:/api/v1/namespaces/kube-system/pods/etcd-pause-20210915030944-22140/status,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:6b374624-5731-4f89-a56d-9035d9d94375,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:17:28.447) (total time: 588ms):
	Trace[765938691]: ---"Object stored in database" 581ms (03:17:29.035)
	Trace[765938691]: [588.5611ms] [588.5611ms] END
	I0915 03:17:29.401024       1 trace.go:205] Trace[1304855879]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:bdeab045-774f-49b1-b558-6423b1828263,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:17:26.790) (total time: 2604ms):
	Trace[1304855879]: ---"About to convert to expected version" 194ms (03:17:26.984)
	Trace[1304855879]: [2.604108s] [2.604108s] END
	I0915 03:21:19.892670       1 trace.go:205] Trace[1162757291]: "GuaranteedUpdate etcd3" type:*coordination.Lease (15-Sep-2021 03:21:19.319) (total time: 572ms):
	Trace[1162757291]: ---"Transaction committed" 537ms (03:21:19.892)
	Trace[1162757291]: [572.9461ms] [572.9461ms] END
	I0915 03:21:19.893037       1 trace.go:205] Trace[783106692]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210915030944-22140,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:aaf7b36b-3bbf-46bd-ad04-4829fd6a0e73,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 03:21:19.319) (total time: 573ms):
	Trace[783106692]: ---"Object stored in database" 573ms (03:21:19.892)
	Trace[783106692]: [573.8048ms] [573.8048ms] END
	
	* 
	* ==> kube-controller-manager [17b1e3be7bcd] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002e59c0, 0x5175b80, 0xc0007fd200, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002e59c0, 0x3b9aca00, 0x0, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0002e59c0, 0x3b9aca00, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d2
	
	goroutine 168 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002e5a70, 0x5175b80, 0xc0007fd140, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002e5a70, 0xdf8475800, 0x0, 0x1, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0002e5a70, 0xdf8475800, 0xc000094360)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	
	goroutine 126 [runnable]:
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc000ae49a0, 0xc000990600)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:343
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2056 +0x728
	
	* 
	* ==> kube-controller-manager [4d57b600089e] <==
	* I0915 03:17:16.477374       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0915 03:17:16.477836       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 03:17:16.480662       1 shared_informer.go:247] Caches are synced for expand 
	I0915 03:17:16.480713       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0915 03:17:16.480750       1 shared_informer.go:247] Caches are synced for GC 
	I0915 03:17:16.480830       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0915 03:17:16.480899       1 shared_informer.go:247] Caches are synced for disruption 
	I0915 03:17:16.480929       1 disruption.go:371] Sending events to api server.
	I0915 03:17:16.481534       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 03:17:16.481742       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 03:17:16.491562       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0915 03:17:16.553860       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 03:17:16.868678       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 03:17:16.984402       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 03:17:17.017747       1 range_allocator.go:373] Set node pause-20210915030944-22140 PodCIDR to [10.244.0.0/24]
	I0915 03:17:17.056489       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 03:17:17.075912       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 03:17:17.253178       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210915030944-22140" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0915 03:17:18.219567       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rqd9p"
	I0915 03:17:19.378180       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0915 03:17:20.254652       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-5g8hd"
	I0915 03:17:20.523554       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-dm895"
	I0915 03:17:24.071561       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0915 03:17:24.187216       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-5g8hd"
	I0915 03:17:26.379250       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [aa377b1651ca] <==
	* I0915 03:18:15.242195       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 03:18:15.242336       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 03:18:15.242418       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 03:18:18.622835       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 03:18:18.622886       1 server_others.go:212] Using iptables Proxier.
	I0915 03:18:18.622917       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 03:18:18.622967       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 03:18:18.828156       1 server.go:649] Version: v1.22.1
	I0915 03:18:18.845682       1 config.go:224] Starting endpoint slice config controller
	I0915 03:18:18.845724       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0915 03:18:18.869195       1 config.go:315] Starting service config controller
	I0915 03:18:18.869220       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 03:18:19.289849       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0915 03:18:19.291324       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20210915030944-22140.16a4e095c3370128", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048775eb25b94ec, ext:7178625301, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20210915030944-22140", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-
20210915030944-22140", UID:"pause-20210915030944-22140", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20210915030944-22140.16a4e095c3370128" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 03:18:19.374727       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [43dab7f1c551] <==
	* E0915 03:16:25.624442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:25.658723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 03:16:25.827416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:26.136981       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 03:16:26.244895       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 03:16:26.321955       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 03:16:26.322142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:26.337042       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 03:16:26.556462       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 03:16:29.389196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:29.495504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 03:16:30.187504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:30.379494       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 03:16:30.403819       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 03:16:30.495186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 03:16:30.504589       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 03:16:30.745607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:30.879190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 03:16:31.074552       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 03:16:31.143364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 03:16:31.467817       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 03:16:31.471073       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 03:16:32.417193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 03:16:32.717212       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0915 03:16:38.962095       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 03:10:20 UTC, end at Wed 2021-09-15 03:22:45 UTC. --
	Sep 15 03:18:08 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:08.799952    2804 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bc0b83c46e60d05af4315578cafa6ca47382d734ce31edc43fede7d97b867c3d"
	Sep 15 03:18:11 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:11.171679    2804 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-5g8hd through plugin: invalid network status for"
	Sep 15 03:18:12 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:12.193795    2804 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-dm895 through plugin: invalid network status for"
	Sep 15 03:18:13 pause-20210915030944-22140 kubelet[2804]: E0915 03:18:13.066726    2804 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod06362ae9-472e-4421-9471-6c3da728acb7/d9ced09b22e872aaa6970cc7f83d50450abea3f3ac2a8306a9355663b1f0f2f6\": RecentStats: unable to find data in memory cache]"
	Sep 15 03:18:18 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:18.100056    2804 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-5g8hd through plugin: invalid network status for"
	Sep 15 03:18:22 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:22.570010    2804 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d9ced09b22e872aaa6970cc7f83d50450abea3f3ac2a8306a9355663b1f0f2f6"
	Sep 15 03:18:22 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:22.698233    2804 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-dm895 through plugin: invalid network status for"
	Sep 15 03:18:23 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:23.244206    2804 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-5g8hd through plugin: invalid network status for"
	Sep 15 03:18:32 pause-20210915030944-22140 kubelet[2804]: E0915 03:18:32.461070    2804 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod06362ae9-472e-4421-9471-6c3da728acb7/775b73a720960fb514e9850dd248e7ec7311c1970937a1df0f3f690ce637237b\": RecentStats: unable to find data in memory cache]"
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:37.006996    2804 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htb82\" (UniqueName: \"kubernetes.io/projected/06362ae9-472e-4421-9471-6c3da728acb7-kube-api-access-htb82\") pod \"06362ae9-472e-4421-9471-6c3da728acb7\" (UID: \"06362ae9-472e-4421-9471-6c3da728acb7\") "
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:37.007237    2804 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06362ae9-472e-4421-9471-6c3da728acb7-config-volume\") pod \"06362ae9-472e-4421-9471-6c3da728acb7\" (UID: \"06362ae9-472e-4421-9471-6c3da728acb7\") "
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: W0915 03:18:37.013015    2804 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/06362ae9-472e-4421-9471-6c3da728acb7/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:37.013359    2804 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06362ae9-472e-4421-9471-6c3da728acb7-config-volume" (OuterVolumeSpecName: "config-volume") pod "06362ae9-472e-4421-9471-6c3da728acb7" (UID: "06362ae9-472e-4421-9471-6c3da728acb7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:37.114095    2804 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06362ae9-472e-4421-9471-6c3da728acb7-config-volume\") on node \"pause-20210915030944-22140\" DevicePath \"\""
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:37.196248    2804 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06362ae9-472e-4421-9471-6c3da728acb7-kube-api-access-htb82" (OuterVolumeSpecName: "kube-api-access-htb82") pod "06362ae9-472e-4421-9471-6c3da728acb7" (UID: "06362ae9-472e-4421-9471-6c3da728acb7"). InnerVolumeSpecName "kube-api-access-htb82". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 03:18:37 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:37.222702    2804 reconciler.go:319] "Volume detached for volume \"kube-api-access-htb82\" (UniqueName: \"kubernetes.io/projected/06362ae9-472e-4421-9471-6c3da728acb7-kube-api-access-htb82\") on node \"pause-20210915030944-22140\" DevicePath \"\""
	Sep 15 03:18:38 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:38.973125    2804 scope.go:110] "RemoveContainer" containerID="775b73a720960fb514e9850dd248e7ec7311c1970937a1df0f3f690ce637237b"
	Sep 15 03:18:43 pause-20210915030944-22140 kubelet[2804]: I0915 03:18:43.394913    2804 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=06362ae9-472e-4421-9471-6c3da728acb7 path="/var/lib/kubelet/pods/06362ae9-472e-4421-9471-6c3da728acb7/volumes"
	Sep 15 03:19:12 pause-20210915030944-22140 kubelet[2804]: E0915 03:19:12.855337    2804 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/kube-system_coredns-78fcd69978-5g8hd_06362ae9-472e-4421-9471-6c3da728acb7: no such file or directory" pod="kube-system/coredns-78fcd69978-5g8hd"
	Sep 15 03:21:39 pause-20210915030944-22140 kubelet[2804]: I0915 03:21:39.200915    2804 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 03:21:39 pause-20210915030944-22140 kubelet[2804]: I0915 03:21:39.589378    2804 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4520129c-3d86-4b9b-811d-7cba9545d903-tmp\") pod \"storage-provisioner\" (UID: \"4520129c-3d86-4b9b-811d-7cba9545d903\") "
	Sep 15 03:21:39 pause-20210915030944-22140 kubelet[2804]: I0915 03:21:39.712754    2804 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph5gp\" (UniqueName: \"kubernetes.io/projected/4520129c-3d86-4b9b-811d-7cba9545d903-kube-api-access-ph5gp\") pod \"storage-provisioner\" (UID: \"4520129c-3d86-4b9b-811d-7cba9545d903\") "
	Sep 15 03:21:54 pause-20210915030944-22140 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Sep 15 03:21:55 pause-20210915030944-22140 systemd[1]: kubelet.service: Succeeded.
	Sep 15 03:21:55 pause-20210915030944-22140 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect pause-20210915030944-22140 --format={{.State.Status}}" took an unusually long time: 2.5391144s
	* Restarting the docker service may improve performance.
	E0915 03:22:41.001443   23852 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (44.99s)
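
The VerifyStatus failure above reduces to two symptoms in the stderr block: the "docker container inspect ... --format={{.State.Status}}" probe took an unusually long 2.5s, and the follow-up "kubectl describe nodes" died on a TLS handshake timeout, so log collection aborted with exit status 110. A minimal sketch of the same status probe, assuming docker is on PATH; the 5-second budget is illustrative, not a minikube value:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState mirrors the probe seen in the stderr block above:
// docker container inspect <name> --format={{.State.Status}}
func containerState(ctx context.Context, name string) (string, error) {
	out, err := exec.CommandContext(ctx, "docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Assumed budget: the report only says 2.5s was "unusually long".
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	start := time.Now()
	state, err := containerState(ctx, "pause-20210915030944-22140")
	fmt.Printf("state=%q err=%v elapsed=%s\n", state, err, time.Since(start))
}

Timing this call on an idle host versus the loaded CI host is a quick way to confirm the "Restarting the docker service may improve performance" hint in the stderr output.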

                                                
                                    
TestPause/serial/DeletePaused (91.47s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20210915030944-22140 --alsologtostderr -v=5
E0915 03:23:23.732155   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
pause_test.go:130: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-20210915030944-22140 --alsologtostderr -v=5: exit status 1 (1m20.4898545s)

                                                
                                                
-- stdout --
	* Deleting "pause-20210915030944-22140" in docker ...
	* Deleting container "pause-20210915030944-22140" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 03:23:24.156603    8900 out.go:298] Setting OutFile to fd 1812 ...
	I0915 03:23:24.158610    8900 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 03:23:24.158610    8900 out.go:311] Setting ErrFile to fd 1820...
	I0915 03:23:24.158610    8900 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 03:23:24.204983    8900 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0915 03:23:26.728769    8900 cli_runner.go:168] Completed: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}: (2.5237945s)
	I0915 03:23:26.730947    8900 config.go:177] Loaded profile config "force-systemd-flag-20210915032047-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 03:23:26.731866    8900 config.go:177] Loaded profile config "pause-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 03:23:26.732469    8900 config.go:177] Loaded profile config "running-upgrade-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 03:23:26.732945    8900 config.go:177] Loaded profile config "stopped-upgrade-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 03:23:26.733260    8900 config.go:177] Loaded profile config "pause-20210915030944-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 03:23:26.733260    8900 delete.go:229] DeleteProfiles
	I0915 03:23:26.733260    8900 delete.go:257] Deleting pause-20210915030944-22140
	I0915 03:23:26.733570    8900 delete.go:262] pause-20210915030944-22140 configuration: &{Name:pause-20210915030944-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915030944-22140 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 03:23:26.737579    8900 out.go:177] * Deleting "pause-20210915030944-22140" in docker ...
	I0915 03:23:26.757881    8900 delete.go:48] deleting possible leftovers for pause-20210915030944-22140 (driver=docker) ...
	I0915 03:23:26.766835    8900 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=pause-20210915030944-22140 --format {{.Names}}
	I0915 03:23:27.652197    8900 out.go:177] * Deleting container "pause-20210915030944-22140" ...
	I0915 03:23:27.665950    8900 cli_runner.go:115] Run: docker container inspect pause-20210915030944-22140 --format={{.State.Status}}
	I0915 03:23:28.581315    8900 cli_runner.go:115] Run: docker exec --privileged -t pause-20210915030944-22140 /bin/bash -c "sudo init 0"
	I0915 03:23:30.403785    8900 cli_runner.go:168] Completed: docker exec --privileged -t pause-20210915030944-22140 /bin/bash -c "sudo init 0": (1.8224771s)
	I0915 03:23:31.421363    8900 cli_runner.go:115] Run: docker container inspect pause-20210915030944-22140 --format={{.State.Status}}
	I0915 03:23:32.272830    8900 oci.go:649] temporary error: container pause-20210915030944-22140 status is Running but expect it to be exited
	I0915 03:23:32.272830    8900 oci.go:655] Successfully shutdown container pause-20210915030944-22140
	I0915 03:23:32.284311    8900 cli_runner.go:115] Run: docker rm -f -v pause-20210915030944-22140

                                                
                                                
** /stderr **
pause_test.go:132: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-20210915030944-22140 --alsologtostderr -v=5" : exit status 1
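
The stderr trace above shows where the delete stalled: "sudo init 0" ran inside the container, the subsequent inspect still reported Running, and the final "docker rm -f -v" never logged completion before the 1m20s exit. A sketch of that shutdown-then-remove sequence using the exact commands from the trace; the retry count and sleep interval are assumptions, since minikube's actual backoff is not visible in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// deleteContainer follows the sequence in the oci.go lines above: best-effort
// "sudo init 0", poll for State.Status == "exited", then force-remove.
func deleteContainer(name string) error {
	// Ask init inside the container to halt; errors are ignored, matching
	// the trace, which proceeds even though the container stays Running.
	exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run()

	for i := 0; i < 10; i++ { // retry count and interval are assumptions
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			break
		}
		time.Sleep(time.Second)
	}
	return exec.Command("docker", "rm", "-f", "-v", name).Run()
}

func main() {
	fmt.Println(deleteContainer("pause-20210915030944-22140"))
}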
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/DeletePaused]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210915030944-22140
helpers_test.go:236: (dbg) docker inspect pause-20210915030944-22140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb",
	        "Created": "2021-09-15T03:10:12.8998703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124074,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T03:10:17.1935737Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/hosts",
	        "LogPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb-json.log",
	        "Name": "/pause-20210915030944-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210915030944-22140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210915030944-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a
55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/d
ocker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf210
7f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210915030944-22140",
	                "Source": "/var/lib/docker/volumes/pause-20210915030944-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210915030944-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210915030944-22140",
	                "name.minikube.sigs.k8s.io": "pause-20210915030944-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6fda170d73e244bd6aad9a26e46ef371ebb8ce6164861df6cd909a40fd3abd0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6fda170d73e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210915030944-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e485399e65f1",
	                        "pause-20210915030944-22140"
	                    ],
	                    "NetworkID": "2fd93bbee36130a1ee184cebd0d9aa1d8a4b662381088e940005af0feac8e13a",
	                    "EndpointID": "ad3bf326aa3567be8405b8ecd24754e80fa72e45792f36c5a0917136cd4daf9e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915030944-22140 -n pause-20210915030944-22140
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915030944-22140 -n pause-20210915030944-22140: exit status 3 (4.7617664s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 03:24:49.791103    8768 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0915 03:24:49.791103    8768 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "pause-20210915030944-22140" host is not running, skipping log retrieval (state="Error")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/DeletePaused]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210915030944-22140
helpers_test.go:236: (dbg) docker inspect pause-20210915030944-22140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb",
	        "Created": "2021-09-15T03:10:12.8998703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124074,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T03:10:17.1935737Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/hosts",
	        "LogPath": "/var/lib/docker/containers/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb/e485399e65f18dd556023f0f68476a7039ddb5a092e9345973c7a0522e1e84eb-json.log",
	        "Name": "/pause-20210915030944-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210915030944-22140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210915030944-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a
55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/d
ocker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf210
7f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/191f9a427455ca5151bc1cb50762ff819154c61a97f8ee85b3497791d3112c02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210915030944-22140",
	                "Source": "/var/lib/docker/volumes/pause-20210915030944-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210915030944-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210915030944-22140",
	                "name.minikube.sigs.k8s.io": "pause-20210915030944-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6fda170d73e244bd6aad9a26e46ef371ebb8ce6164861df6cd909a40fd3abd0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6fda170d73e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210915030944-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e485399e65f1",
	                        "pause-20210915030944-22140"
	                    ],
	                    "NetworkID": "2fd93bbee36130a1ee184cebd0d9aa1d8a4b662381088e940005af0feac8e13a",
	                    "EndpointID": "ad3bf326aa3567be8405b8ecd24754e80fa72e45792f36c5a0917136cd4daf9e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915030944-22140 -n pause-20210915030944-22140
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915030944-22140 -n pause-20210915030944-22140: exit status 3 (4.5029274s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 03:24:55.105272   40004 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0915 03:24:55.105272   40004 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "pause-20210915030944-22140" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/DeletePaused (91.47s)
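The failure is a transport error, not an assertion: both status probes die with "ssh: handshake failed: EOF", meaning sshd in the paused/deleted container closed the connection mid-handshake. A hedged sketch of a dial-with-retry loop using golang.org/x/crypto/ssh (not minikube's actual status path; the user, port, and retry policy below are assumptions):

    package main

    import (
        "fmt"
        "log"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry retries dials that fail with "ssh: handshake failed: EOF"
    // while the node's sshd is still coming up (or going down).
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(2 * time.Second) // back off between attempts
        }
        return nil, fmt.Errorf("ssh %s after %d attempts: %w", addr, attempts, err)
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User: "docker", // minikube's node user
            // Real code would add Auth from the machine's id_rsa key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         10 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:58455", cfg, 5)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
    }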

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (720.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210915034542-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 04:06:12.966010   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:12.972335   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:12.982926   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:13.003952   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:13.045466   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:13.128133   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:13.289262   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:13.609750   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:14.252916   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:15.533019   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:18.095233   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:23.216116   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:25.228027   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 04:06:33.457708   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:06:53.939692   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:07:34.901089   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:08:21.190844   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 04:08:22.120776   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 04:08:23.721803   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 04:08:56.823280   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
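The repeated cert_rotation.go errors are background noise from client-go rather than part of this test: the shared kubeconfig still lists users for profiles that earlier tests deleted (old-k8s-version-..., addons-..., functional-..., skaffold-...), so the certificate-rotation watcher keeps failing to open their client.crt files. A small sketch that surfaces such stale entries, assuming client-go's clientcmd loader and the kubeconfig path used in this run:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile(`C:\Users\jenkins\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        // Report every user whose client certificate file no longer exists.
        for name, auth := range cfg.AuthInfos {
            if auth.ClientCertificate == "" {
                continue
            }
            if _, err := os.Stat(auth.ClientCertificate); err != nil {
                fmt.Printf("stale kubeconfig user %q: %v\n", name, err)
            }
        }
    }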

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20210915034542-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0: exit status 1 (11m28.4126663s)

                                                
                                                
-- stdout --
	* [no-preload-20210915034542-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20210915034542-22140 in cluster no-preload-20210915034542-22140
	* Pulling base image ...
	* Restarting existing docker container for "no-preload-20210915034542-22140" ...
	* Preparing Kubernetes v1.22.2-rc.0 on Docker 20.10.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 04:04:14.531591   24768 out.go:298] Setting OutFile to fd 2016 ...
	I0915 04:04:14.532601   24768 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 04:04:14.532601   24768 out.go:311] Setting ErrFile to fd 2112...
	I0915 04:04:14.532601   24768 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 04:04:14.554615   24768 out.go:305] Setting JSON to false
	I0915 04:04:14.559594   24768 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10281437,"bootTime":1621397217,"procs":157,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 04:04:14.560594   24768 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 04:04:14.569008   24768 out.go:177] * [no-preload-20210915034542-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 04:04:14.569539   24768 notify.go:169] Checking for updates...
	I0915 04:04:14.572387   24768 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 04:04:14.575378   24768 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 04:04:14.578213   24768 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 04:04:14.579796   24768 config.go:177] Loaded profile config "no-preload-20210915034542-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 04:04:14.581192   24768 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 04:04:17.111074   24768 docker.go:132] docker version: linux-20.10.5
	I0915 04:04:17.128171   24768 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 04:04:18.561308   24768 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.4331416s)
	I0915 04:04:18.562330   24768 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 04:04:17.9093571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 04:04:18.566334   24768 out.go:177] * Using the docker driver based on existing profile
	I0915 04:04:18.566334   24768 start.go:278] selected driver: docker
	I0915 04:04:18.566334   24768 start.go:751] validating driver "docker" against &{Name:no-preload-20210915034542-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:no-preload-20210915034542-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 04:04:18.566334   24768 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 04:04:18.878186   24768 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 04:04:20.524893   24768 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.646713s)
	I0915 04:04:20.525076   24768 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 04:04:19.8690213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 04:04:20.525984   24768 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 04:04:20.525984   24768 cni.go:93] Creating CNI manager for ""
	I0915 04:04:20.525984   24768 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:04:20.525984   24768 start_flags.go:278] config:
	{Name:no-preload-20210915034542-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:no-preload-20210915034542-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 04:04:20.530121   24768 out.go:177] * Starting control plane node no-preload-20210915034542-22140 in cluster no-preload-20210915034542-22140
	I0915 04:04:20.530280   24768 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 04:04:20.532838   24768 out.go:177] * Pulling base image ...
	I0915 04:04:20.533247   24768 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 04:04:20.533247   24768 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 04:04:20.533716   24768 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\config.json ...
	I0915 04:04:20.533716   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0915 04:04:20.533893   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5
	I0915 04:04:20.533893   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns:v1.8.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4
	I0915 04:04:20.534277   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.2-rc.0
	I0915 04:04:20.535283   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.2-rc.0
	I0915 04:04:20.535283   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.2-rc.0
	I0915 04:04:20.535283   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.2-rc.0
	I0915 04:04:20.535283   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I0915 04:04:20.535283   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.5.0-0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0
	I0915 04:04:20.535523   24768 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0915 04:04:20.873450   24768 cache.go:108] acquiring lock: {Name:mkfe443c64d1a3dae7531e1da24945fa4d1b684d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.874461   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	I0915 04:04:20.875425   24768 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 339.7268ms
	I0915 04:04:20.875425   24768 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	I0915 04:04:20.883406   24768 cache.go:108] acquiring lock: {Name:mk3a8e472a33e4b070875cac486aa51a260bc260 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.883406   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.2-rc.0 exists
	I0915 04:04:20.884406   24768 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.2-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.22.2-rc.0" took 348.8839ms
	I0915 04:04:20.884406   24768 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.2-rc.0 succeeded
	I0915 04:04:20.896185   24768 cache.go:108] acquiring lock: {Name:mk88d990826277d365279193b13d7ccdb5a17327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.896688   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.2-rc.0 exists
	I0915 04:04:20.896688   24768 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.2-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.22.2-rc.0" took 361.406ms
	I0915 04:04:20.897409   24768 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.2-rc.0 succeeded
	I0915 04:04:20.906988   24768 cache.go:108] acquiring lock: {Name:mk135d3920b0a72b5911b7984928f84e3979d612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.907750   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5 exists
	I0915 04:04:20.908037   24768 cache.go:97] cache image "k8s.gcr.io/pause:3.5" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.5" took 374.1448ms
	I0915 04:04:20.908037   24768 cache.go:81] save to tar file k8s.gcr.io/pause:3.5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5 succeeded
	I0915 04:04:20.910426   24768 cache.go:108] acquiring lock: {Name:mked3c3664ca64237f33cb43644f260d5961b8dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.910848   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0 exists
	I0915 04:04:20.911140   24768 cache.go:97] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.5.0-0" took 375.4417ms
	I0915 04:04:20.911140   24768 cache.go:81] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0 succeeded
	I0915 04:04:20.917854   24768 cache.go:108] acquiring lock: {Name:mk70adbbcf3bbef3faf39d533cb99c37f4aae0b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.917854   24768 cache.go:108] acquiring lock: {Name:mk17023a3d51200ef78bf81556863d010272f465 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.918312   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.2-rc.0 exists
	I0915 04:04:20.918653   24768 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.22.2-rc.0" took 384.2551ms
	I0915 04:04:20.918653   24768 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.2-rc.0 succeeded
	I0915 04:04:20.918653   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4 exists
	I0915 04:04:20.919415   24768 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns\\coredns_v1.8.4" took 385.5229ms
	I0915 04:04:20.919415   24768 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4 succeeded
	I0915 04:04:20.919579   24768 cache.go:108] acquiring lock: {Name:mkcbba06c099fa67c03e9375ab41c3707a41a063 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.919870   24768 cache.go:108] acquiring lock: {Name:mkdbe7027ff5da973630f19971c02c88abe3f308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.920058   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	I0915 04:04:20.920456   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.2-rc.0 exists
	I0915 04:04:20.920601   24768 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 386.7412ms
	I0915 04:04:20.920601   24768 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	I0915 04:04:20.920601   24768 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.2-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.22.2-rc.0" took 385.0785ms
	I0915 04:04:20.920601   24768 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.2-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.2-rc.0 succeeded
	I0915 04:04:20.936363   24768 cache.go:108] acquiring lock: {Name:mkbd69c89f5d4341beed10f900f1632dd59716b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:20.937023   24768 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0915 04:04:20.937789   24768 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 401.945ms
	I0915 04:04:20.937789   24768 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0915 04:04:20.937789   24768 cache.go:88] Successfully saved all images to host disk.
	I0915 04:04:21.644715   24768 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 04:04:21.644860   24768 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 04:04:21.644860   24768 cache.go:206] Successfully downloaded all kic artifacts
	I0915 04:04:21.645067   24768 start.go:313] acquiring machines lock for no-preload-20210915034542-22140: {Name:mkfe70f75561e47edbbcf8ba6de4f795d94687a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:04:21.645337   24768 start.go:317] acquired machines lock for "no-preload-20210915034542-22140" in 270.3µs
	I0915 04:04:21.645642   24768 start.go:93] Skipping create...Using existing machine configuration
	I0915 04:04:21.645849   24768 fix.go:55] fixHost starting: 
	I0915 04:04:21.671372   24768 cli_runner.go:115] Run: docker container inspect no-preload-20210915034542-22140 --format={{.State.Status}}
	I0915 04:04:22.560398   24768 fix.go:108] recreateIfNeeded on no-preload-20210915034542-22140: state=Stopped err=<nil>
	W0915 04:04:22.561184   24768 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 04:04:22.565549   24768 out.go:177] * Restarting existing docker container for "no-preload-20210915034542-22140" ...
	I0915 04:04:22.577128   24768 cli_runner.go:115] Run: docker start no-preload-20210915034542-22140
	I0915 04:04:28.023106   24768 cli_runner.go:168] Completed: docker start no-preload-20210915034542-22140: (5.4458762s)
	I0915 04:04:28.043109   24768 cli_runner.go:115] Run: docker container inspect no-preload-20210915034542-22140 --format={{.State.Status}}
	I0915 04:04:29.005327   24768 kic.go:420] container "no-preload-20210915034542-22140" state is running.
	I0915 04:04:29.044413   24768 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210915034542-22140
	I0915 04:04:29.904888   24768 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\config.json ...
	I0915 04:04:29.910303   24768 machine.go:88] provisioning docker machine ...
	I0915 04:04:29.910743   24768 ubuntu.go:169] provisioning hostname "no-preload-20210915034542-22140"
	I0915 04:04:29.925419   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:30.992028   24768 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140: (1.0663683s)
	I0915 04:04:30.999469   24768 main.go:130] libmachine: Using SSH client type: native
	I0915 04:04:31.000139   24768 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59512 <nil> <nil>}
	I0915 04:04:31.000139   24768 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210915034542-22140 && echo "no-preload-20210915034542-22140" | sudo tee /etc/hostname
	I0915 04:04:31.075310   24768 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0915 04:04:34.098752   24768 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0915 04:04:38.307496   24768 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210915034542-22140
	
	I0915 04:04:38.330229   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:39.264135   24768 main.go:130] libmachine: Using SSH client type: native
	I0915 04:04:39.264838   24768 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59512 <nil> <nil>}
	I0915 04:04:39.264838   24768 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210915034542-22140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210915034542-22140/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210915034542-22140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 04:04:40.712455   24768 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 04:04:40.712455   24768 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 04:04:40.712763   24768 ubuntu.go:177] setting up certificates
	I0915 04:04:40.712763   24768 provision.go:83] configureAuth start
	I0915 04:04:40.733327   24768 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210915034542-22140
	I0915 04:04:41.677961   24768 provision.go:138] copyHostCerts
	I0915 04:04:41.678894   24768 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 04:04:41.679041   24768 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 04:04:41.679878   24768 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 04:04:41.683182   24768 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 04:04:41.683364   24768 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 04:04:41.684232   24768 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 04:04:41.686936   24768 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 04:04:41.687183   24768 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 04:04:41.688040   24768 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0915 04:04:41.690150   24768 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-20210915034542-22140 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210915034542-22140]
	I0915 04:04:42.354640   24768 provision.go:172] copyRemoteCerts
	I0915 04:04:42.368650   24768 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 04:04:42.377641   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:43.359506   24768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59512 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\no-preload-20210915034542-22140\id_rsa Username:docker}
	I0915 04:04:44.036876   24768 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.6680234s)
	I0915 04:04:44.037340   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 04:04:44.348262   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0915 04:04:44.775898   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 04:04:45.302757   24768 provision.go:86] duration metric: configureAuth took 4.589874s
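configureAuth regenerates the machine's server certificate with SANs covering the container IP, localhost, and the machine name (the san=[...] list logged by provision.go:112 above). A self-signed approximation in plain Go crypto/x509; minikube signs with its own CA instead, so this is only a sketch of the SAN handling:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-20210915034542-22140"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the logged san=[...] list.
            DNSNames:    []string{"localhost", "minikube", "no-preload-20210915034542-22140"},
            IPAddresses: []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed for brevity (the template doubles as its own parent).
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }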
	I0915 04:04:45.302757   24768 ubuntu.go:193] setting minikube options for container-runtime
	I0915 04:04:45.304165   24768 config.go:177] Loaded profile config "no-preload-20210915034542-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 04:04:45.323365   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:46.180140   24768 main.go:130] libmachine: Using SSH client type: native
	I0915 04:04:46.181100   24768 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59512 <nil> <nil>}
	I0915 04:04:46.181100   24768 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 04:04:47.279753   24768 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 04:04:47.279870   24768 ubuntu.go:71] root file system type: overlay
	I0915 04:04:47.280592   24768 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 04:04:47.303259   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:48.265216   24768 main.go:130] libmachine: Using SSH client type: native
	I0915 04:04:48.266020   24768 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59512 <nil> <nil>}
	I0915 04:04:48.266020   24768 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 04:04:49.488368   24768 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 04:04:49.499368   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:50.501180   24768 main.go:130] libmachine: Using SSH client type: native
	I0915 04:04:50.501983   24768 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59512 <nil> <nil>}
	I0915 04:04:50.501983   24768 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 04:04:51.318832   24768 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 04:04:51.318832   24768 machine.go:91] provisioned docker machine in 21.4082666s
	I0915 04:04:51.319829   24768 start.go:267] post-start starting for "no-preload-20210915034542-22140" (driver="docker")
	I0915 04:04:51.319829   24768 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 04:04:51.332831   24768 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 04:04:51.342858   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:52.258325   24768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59512 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\no-preload-20210915034542-22140\id_rsa Username:docker}
	I0915 04:04:53.024209   24768 ssh_runner.go:192] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.6913842s)
	I0915 04:04:53.045424   24768 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 04:04:53.162304   24768 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 04:04:53.162304   24768 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 04:04:53.162487   24768 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 04:04:53.162487   24768 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 04:04:53.162601   24768 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 04:04:53.163152   24768 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 04:04:53.164680   24768 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem -> 221402.pem in /etc/ssl/certs
	I0915 04:04:53.188814   24768 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I0915 04:04:53.599060   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem --> /etc/ssl/certs/221402.pem (1708 bytes)
	I0915 04:04:54.165734   24768 start.go:270] post-start completed in 2.8459153s
	I0915 04:04:54.196234   24768 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 04:04:54.209571   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:55.175765   24768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59512 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\no-preload-20210915034542-22140\id_rsa Username:docker}
	I0915 04:04:55.688433   24768 ssh_runner.go:192] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4922035s)
	I0915 04:04:55.688749   24768 fix.go:57] fixHost completed within 34.0430176s
	I0915 04:04:55.688749   24768 start.go:80] releasing machines lock for "no-preload-20210915034542-22140", held for 34.0435299s
	I0915 04:04:55.706227   24768 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210915034542-22140
	I0915 04:04:56.578796   24768 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 04:04:56.591628   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:56.593897   24768 ssh_runner.go:152] Run: systemctl --version
	I0915 04:04:56.610353   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:04:57.542993   24768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59512 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\no-preload-20210915034542-22140\id_rsa Username:docker}
	I0915 04:04:57.607779   24768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59512 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\no-preload-20210915034542-22140\id_rsa Username:docker}
	I0915 04:04:58.406705   24768 ssh_runner.go:192] Completed: systemctl --version: (1.8126317s)
	I0915 04:04:58.421453   24768 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 04:04:58.967539   24768 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.3887506s)
	I0915 04:04:58.982227   24768 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 04:04:59.239194   24768 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 04:04:59.255183   24768 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 04:04:59.449179   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 04:04:59.764453   24768 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 04:05:01.515332   24768 ssh_runner.go:192] Completed: sudo systemctl unmask docker.service: (1.7506989s)
	I0915 04:05:01.555634   24768 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 04:05:02.906141   24768 ssh_runner.go:192] Completed: sudo systemctl enable docker.socket: (1.3505119s)
	I0915 04:05:02.920794   24768 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 04:05:03.111986   24768 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 04:05:04.238900   24768 ssh_runner.go:192] Completed: sudo systemctl daemon-reload: (1.1269176s)
	I0915 04:05:04.255258   24768 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 04:05:04.413764   24768 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 04:05:05.516449   24768 ssh_runner.go:192] Completed: docker version --format {{.Server.Version}}: (1.102443s)
	I0915 04:05:05.528833   24768 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 04:05:06.743195   24768 ssh_runner.go:192] Completed: docker version --format {{.Server.Version}}: (1.2143662s)
	I0915 04:05:06.746470   24768 out.go:204] * Preparing Kubernetes v1.22.2-rc.0 on Docker 20.10.8 ...
	I0915 04:05:06.759235   24768 cli_runner.go:115] Run: docker exec -t no-preload-20210915034542-22140 dig +short host.docker.internal
	I0915 04:05:08.647163   24768 cli_runner.go:168] Completed: docker exec -t no-preload-20210915034542-22140 dig +short host.docker.internal: (1.8879335s)
	I0915 04:05:08.647396   24768 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 04:05:08.664148   24768 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 04:05:08.723865   24768 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
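The bash one-liner above is an idempotent upsert: drop any existing host.minikube.internal entry from /etc/hosts, append the freshly dug address, and write through a temp file so a partial write cannot corrupt the live file. The same idiom as a short Go helper (illustrative only; upsertHost is not a minikube function):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>".
    func upsertHost(path, ip, name string) error {
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }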
	I0915 04:05:08.920979   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:05:09.942028   24768 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210915034542-22140: (1.0210529s)
	I0915 04:05:09.943027   24768 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 04:05:09.954890   24768 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 04:05:10.884284   24768 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.2-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.2-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0
	k8s.gcr.io/kube-proxy:v1.22.2-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	busybox:1.28.4-glibc
	
	-- /stdout --
	I0915 04:05:10.884284   24768 cache_images.go:78] Images are preloaded, skipping loading
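"Images are preloaded, skipping loading" is the outcome of comparing the docker images listing above against the image set required for v1.22.2-rc.0. A rough sketch of that comparison (the required list is copied from the stdout block above; minikube's bootstrapper remains the source of truth):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        required := []string{
            "k8s.gcr.io/kube-apiserver:v1.22.2-rc.0",
            "k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0",
            "k8s.gcr.io/kube-scheduler:v1.22.2-rc.0",
            "k8s.gcr.io/kube-proxy:v1.22.2-rc.0",
            "k8s.gcr.io/etcd:3.5.0-0",
            "k8s.gcr.io/coredns/coredns:v1.8.4",
            "k8s.gcr.io/pause:3.5",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing, would need to load:", img)
            }
        }
    }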
	I0915 04:05:10.897171   24768 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 04:05:13.388953   24768 ssh_runner.go:192] Completed: docker info --format {{.CgroupDriver}}: (2.491791s)
	I0915 04:05:13.389379   24768 cni.go:93] Creating CNI manager for ""
	I0915 04:05:13.389506   24768 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:05:13.389678   24768 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 04:05:13.389678   24768 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.22.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210915034542-22140 NodeName:no-preload-20210915034542-22140 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 04:05:13.390264   24768 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "no-preload-20210915034542-22140"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
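For reference, the generated config above is a four-document YAML stream separated by --- markers: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch that enumerates the kinds in such a stream (kinds is an illustrative helper, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML stream on "---" separators and
// reports the value of each document's top-level "kind:" field.
func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}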
	I0915 04:05:13.390788   24768 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20210915034542-22140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.2-rc.0 ClusterName:no-preload-20210915034542-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
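The unit text above is a systemd drop-in (the 10-kubeadm.conf scp'd just below): the empty ExecStart= first clears the ExecStart inherited from the base kubelet.service, and the second ExecStart installs the full kubelet command line for this node. A Go sketch that renders an equivalent drop-in from this run's parameters (dropIn is a made-up helper):

package main

import "fmt"

// dropIn renders a kubelet systemd override. The bare "ExecStart="
// resets the base unit's command before the override sets its own.
func dropIn(version, node, ip string) string {
	return fmt.Sprintf(`[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`, version, node, ip)
}

func main() {
	fmt.Print(dropIn("v1.22.2-rc.0", "no-preload-20210915034542-22140", "192.168.85.2"))
}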
	I0915 04:05:13.410904   24768 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.2-rc.0
	I0915 04:05:13.534664   24768 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 04:05:13.556408   24768 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 04:05:13.675086   24768 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0915 04:05:13.909098   24768 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0915 04:05:14.144536   24768 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0915 04:05:14.332714   24768 ssh_runner.go:152] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0915 04:05:14.395955   24768 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 04:05:14.583166   24768 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140 for IP: 192.168.85.2
	I0915 04:05:14.583869   24768 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 04:05:14.584467   24768 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 04:05:14.585139   24768 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\client.key
	I0915 04:05:14.585881   24768 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\apiserver.key.43b9df8c
	I0915 04:05:14.586692   24768 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\proxy-client.key
	I0915 04:05:14.588798   24768 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22140.pem (1338 bytes)
	W0915 04:05:14.589978   24768 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22140_empty.pem, impossibly tiny 0 bytes
	I0915 04:05:14.589978   24768 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 04:05:14.590428   24768 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 04:05:14.591320   24768 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 04:05:14.591687   24768 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0915 04:05:14.592421   24768 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem (1708 bytes)
	I0915 04:05:14.599679   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 04:05:14.851048   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 04:05:15.046746   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 04:05:15.394913   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915034542-22140\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 04:05:15.749211   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 04:05:16.045186   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 04:05:16.350626   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 04:05:16.840950   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I0915 04:05:17.161470   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem --> /usr/share/ca-certificates/221402.pem (1708 bytes)
	I0915 04:05:17.437196   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 04:05:17.709689   24768 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22140.pem --> /usr/share/ca-certificates/22140.pem (1338 bytes)
	I0915 04:05:18.075952   24768 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 04:05:18.377328   24768 ssh_runner.go:152] Run: openssl version
	I0915 04:05:18.501049   24768 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221402.pem && ln -fs /usr/share/ca-certificates/221402.pem /etc/ssl/certs/221402.pem"
	I0915 04:05:18.656523   24768 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/221402.pem
	I0915 04:05:18.721823   24768 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 01:56 /usr/share/ca-certificates/221402.pem
	I0915 04:05:18.737513   24768 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221402.pem
	I0915 04:05:18.834378   24768 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221402.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 04:05:18.989017   24768 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 04:05:19.165674   24768 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 04:05:19.250505   24768 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 01:33 /usr/share/ca-certificates/minikubeCA.pem
	I0915 04:05:19.266963   24768 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 04:05:19.336560   24768 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 04:05:19.485977   24768 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22140.pem && ln -fs /usr/share/ca-certificates/22140.pem /etc/ssl/certs/22140.pem"
	I0915 04:05:19.630772   24768 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22140.pem
	I0915 04:05:19.669559   24768 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 01:56 /usr/share/ca-certificates/22140.pem
	I0915 04:05:19.684531   24768 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22140.pem
	I0915 04:05:19.777279   24768 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22140.pem /etc/ssl/certs/51391683.0"
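The openssl x509 -hash / ln -fs pairs above install each PEM under /etc/ssl/certs as <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is the layout OpenSSL's default verify path searches when validating certificates. A Go sketch of that installation step (installCA is illustrative; it needs root and an openssl binary on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the sequence in the log: compute the certificate's
// subject hash with openssl, then symlink the PEM into /etc/ssl/certs
// as <hash>.0 so OpenSSL can discover it.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs behaviour: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}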
	I0915 04:05:19.871193   24768 kubeadm.go:390] StartCluster: {Name:no-preload-20210915034542-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:no-preload-20210915034542-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 04:05:19.888259   24768 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 04:05:20.347977   24768 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 04:05:20.481919   24768 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 04:05:20.481919   24768 kubeadm.go:600] restartCluster start
	I0915 04:05:20.496941   24768 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 04:05:20.644174   24768 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:20.656184   24768 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210915034542-22140
	I0915 04:05:21.585728   24768 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210915034542-22140" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 04:05:21.588042   24768 kubeconfig.go:128] "no-preload-20210915034542-22140" context is missing from C:\Users\jenkins\minikube-integration\kubeconfig - will repair!
	I0915 04:05:21.589944   24768 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 04:05:21.704477   24768 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 04:05:21.835627   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:21.856091   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:22.062444   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:22.263099   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:22.287767   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:22.464405   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:22.663824   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:22.679400   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:22.842827   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:22.863150   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:22.885894   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:23.022132   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:23.063902   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:23.081398   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:23.250515   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:23.263198   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:23.280376   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:23.458877   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:23.463237   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:23.479121   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:23.623173   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:23.663428   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:23.710820   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:23.857219   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:23.863370   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:23.879894   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:24.023911   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:24.063730   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:24.085491   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:24.243565   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:24.262793   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:24.278830   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:24.399081   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:24.463397   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:24.480454   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:24.667226   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:24.864602   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:24.881268   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:25.024008   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:25.063647   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:25.075456   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:25.234406   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:25.234657   24768 api_server.go:164] Checking apiserver status ...
	I0915 04:05:25.250918   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:05:25.448205   24768 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:25.448205   24768 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
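The repeated "Checking apiserver status" entries above are a bounded retry of a single probe: pgrep -xnf for the newest kube-apiserver process whose full command line mentions minikube. Every attempt exits non-zero, so the loop gives up and concludes that a reconfigure is needed. A Go sketch of that probe-with-timeout pattern (apiserverPID is a made-up stand-in for the check in api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID retries the pgrep probe from the log until it succeeds
// or the deadline passes. pgrep exits non-zero when no process matches,
// which exec.Command surfaces as an error.
func apiserverPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond) // roughly the cadence in the log
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver")
}

func main() {
	pid, err := apiserverPID(4 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}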
	I0915 04:05:25.448205   24768 kubeadm.go:1032] stopping kube-system containers ...
	I0915 04:05:25.460921   24768 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 04:05:26.379424   24768 docker.go:390] Stopping containers: [6e9b5da4f57b 6375a9dada5a 60e36845bf18 cac0214ef61e e6bb60c10d48 ae18662f4926 4648cfc70282 9195c3073fcc 9d9cf0d266e2 68ed2306d7cd fc4bace80c26 bd8ffc0bef2a 5f0e310c1caf 24993fd21fd4 a80736cb41a6 83655db2430d]
	I0915 04:05:26.392373   24768 ssh_runner.go:152] Run: docker stop 6e9b5da4f57b 6375a9dada5a 60e36845bf18 cac0214ef61e e6bb60c10d48 ae18662f4926 4648cfc70282 9195c3073fcc 9d9cf0d266e2 68ed2306d7cd fc4bace80c26 bd8ffc0bef2a 5f0e310c1caf 24993fd21fd4 a80736cb41a6 83655db2430d
	I0915 04:05:27.106287   24768 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I0915 04:05:27.214688   24768 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 04:05:27.373033   24768 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Sep 15 03:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 15 03:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 15 03:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep 15 03:57 /etc/kubernetes/scheduler.conf
	
	I0915 04:05:27.393620   24768 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 04:05:27.539708   24768 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 04:05:27.656024   24768 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 04:05:27.790059   24768 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:27.810527   24768 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 04:05:27.974337   24768 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 04:05:28.122905   24768 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:05:28.146037   24768 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 04:05:28.264343   24768 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 04:05:28.376217   24768 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0915 04:05:28.376324   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:05:29.233758   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:05:36.965392   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (7.7314256s)
	I0915 04:05:36.965599   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:05:39.221668   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (2.2560774s)
	I0915 04:05:39.221668   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:05:41.016000   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (1.7943379s)
	I0915 04:05:41.016165   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:05:42.617292   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml": (1.6010302s)
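Rather than rerunning a full kubeadm init, the restart above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml. A Go sketch of that sequence (replayPhases is illustrative; the real invocations in the log also prepend the versioned binaries directory to PATH via sudo env, which is omitted here):

package main

import (
	"fmt"
	"os/exec"
)

// replayPhases runs the same kubeadm init phases, in the same order,
// as the restartCluster path above. Assumes kubeadm is on PATH.
func replayPhases() error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := replayPhases(); err != nil {
		fmt.Println(err)
	}
}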
	I0915 04:05:42.617454   24768 api_server.go:50] waiting for apiserver process to appear ...
	I0915 04:05:42.636559   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:43.494506   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:43.994320   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:44.499737   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:44.996121   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:45.494111   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:46.005119   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:46.506020   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:46.996944   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:47.500042   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:48.495585   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:48.993835   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:50.013613   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:50.498186   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:50.997965   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:51.498561   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:52.004713   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:52.996333   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:53.994805   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:54.495885   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:54.995674   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:56.003146   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:56.492984   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:56.995725   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:57.490669   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:58.504631   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:58.994060   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:05:59.496400   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:00.001118   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:00.996976   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:01.516219   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:02.509261   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:03.022943   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:03.498057   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:03.994605   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:04.518778   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:05.490952   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:06.496034   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:07.002920   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:07.995577   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:08.502037   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:09.011634   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:09.503498   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:09.995013   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:10.996171   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:12.004332   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:12.491648   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:13.499132   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:13.995325   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:15.083426   24768 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.0881054s)
	I0915 04:06:15.492993   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:16.496921   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:16.992463   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:17.497096   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:18.006313   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:18.496206   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:18.992949   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:19.500017   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:19.991715   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:20.990178   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:21.494280   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:22.492320   24768 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:06:23.839401   24768 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.3470854s)
	I0915 04:06:23.839401   24768 api_server.go:70] duration metric: took 41.2220869s to wait for apiserver process to appear ...
	I0915 04:06:23.839401   24768 api_server.go:86] waiting for apiserver healthz status ...
	I0915 04:06:23.839401   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:23.868900   24768 api_server.go:255] stopped: https://127.0.0.1:59511/healthz: Get "https://127.0.0.1:59511/healthz": EOF
	I0915 04:06:24.370170   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:29.372069   24768 api_server.go:255] stopped: https://127.0.0.1:59511/healthz: Get "https://127.0.0.1:59511/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:06:29.868954   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:34.870086   24768 api_server.go:255] stopped: https://127.0.0.1:59511/healthz: Get "https://127.0.0.1:59511/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:06:35.368867   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:40.372461   24768 api_server.go:255] stopped: https://127.0.0.1:59511/healthz: Get "https://127.0.0.1:59511/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:06:40.870136   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:45.872875   24768 api_server.go:255] stopped: https://127.0.0.1:59511/healthz: Get "https://127.0.0.1:59511/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:06:46.369203   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:46.717037   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 04:06:46.717037   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
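The healthz probes above show the apiserver coming up in stages: first connection errors and client timeouts, then a 403 (anonymous access to /healthz is forbidden until the bootstrap RBAC policy that grants unauthenticated access is reconciled), then 500 responses whose bodies flag each still-failing poststarthook with [-] until everything reports [+]. A Go sketch of polling /healthz until it returns 200 (waitHealthy is a made-up helper; TLS verification is skipped because the apiserver presents minikube's self-signed certificates, and 59511 is this run's Docker-published port):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver's /healthz endpoint until it answers
// 200 OK or the overall timeout elapses. Non-200 bodies are printed so
// the per-check [+]/[-] breakdown is visible, as in the log.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitHealthy("https://127.0.0.1:59511/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}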
	I0915 04:06:46.869739   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:47.035707   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:47.036079   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:47.368990   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:47.520069   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:47.520069   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:47.869283   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:47.982870   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:47.982870   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:48.369372   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:48.423749   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:48.423749   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:48.869127   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:48.941236   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:48.941236   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:49.368967   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:49.468192   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:49.468755   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:49.869123   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:49.943129   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:49.943381   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:50.369886   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:50.520658   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:50.520965   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:50.869271   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:50.927469   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:50.927686   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:51.370183   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:51.437859   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:51.438886   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:51.870569   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:51.957243   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:51.957442   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:52.369731   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:52.514058   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:06:52.514241   24768 api_server.go:101] status: https://127.0.0.1:59511/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:06:52.870899   24768 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59511/healthz ...
	I0915 04:06:52.916229   24768 api_server.go:265] https://127.0.0.1:59511/healthz returned 200:
	ok
	I0915 04:06:53.015061   24768 api_server.go:139] control plane version: v1.22.2-rc.0
	I0915 04:06:53.015236   24768 api_server.go:129] duration metric: took 29.1759343s to wait for apiserver health ...
	I0915 04:06:53.015236   24768 cni.go:93] Creating CNI manager for ""
	I0915 04:06:53.015236   24768 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:06:53.015236   24768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 04:06:53.140094   24768 system_pods.go:59] 8 kube-system pods found
	I0915 04:06:53.140361   24768 system_pods.go:61] "coredns-78fcd69978-lh6bj" [ed42c11e-9cb4-4447-aca4-9223592474e1] Running
	I0915 04:06:53.140361   24768 system_pods.go:61] "etcd-no-preload-20210915034542-22140" [ff89400f-a284-4934-acad-801b51540b36] Running
	I0915 04:06:53.140361   24768 system_pods.go:61] "kube-apiserver-no-preload-20210915034542-22140" [1584d5d9-10f1-4545-a7b5-a53880caba92] Running
	I0915 04:06:53.140361   24768 system_pods.go:61] "kube-controller-manager-no-preload-20210915034542-22140" [f179095d-ce16-4ec6-bf44-b7c18bad2c43] Running
	I0915 04:06:53.140361   24768 system_pods.go:61] "kube-proxy-sm9hm" [cf8aa153-7233-45cf-b422-2a961b123e4f] Running
	I0915 04:06:53.140361   24768 system_pods.go:61] "kube-scheduler-no-preload-20210915034542-22140" [27ff0aaa-24c0-43ae-9297-ab0de3bdc9af] Running
	I0915 04:06:53.140361   24768 system_pods.go:61] "metrics-server-7c784ccb57-j54zl" [bc2ae82f-8845-408c-9184-7f274436f75a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:06:53.140361   24768 system_pods.go:61] "storage-provisioner" [76d70356-20f3-4520-8808-ea50f76a1c01] Running
	I0915 04:06:53.140361   24768 system_pods.go:74] duration metric: took 125.1247ms to wait for pod list to return data ...
	I0915 04:06:53.140623   24768 node_conditions.go:102] verifying NodePressure condition ...
	I0915 04:06:53.237505   24768 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 04:06:53.237643   24768 node_conditions.go:123] node cpu capacity is 4
	I0915 04:06:53.237643   24768 node_conditions.go:105] duration metric: took 97.0202ms to run NodePressure ...
	I0915 04:06:53.237875   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:06:55.692142   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.4542752s)
	I0915 04:06:55.692921   24768 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0915 04:06:55.777938   24768 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0915 04:06:56.101161   24768 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0915 04:06:56.682726   24768 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0915 04:06:57.397822   24768 retry.go:31] will retry after 791.196345ms: kubelet not initialised
	I0915 04:06:58.257465   24768 retry.go:31] will retry after 1.170244332s: kubelet not initialised
	I0915 04:06:59.502776   24768 retry.go:31] will retry after 2.253109428s: kubelet not initialised
	I0915 04:07:01.798019   24768 retry.go:31] will retry after 1.610739793s: kubelet not initialised
	I0915 04:07:03.445255   24768 retry.go:31] will retry after 2.804311738s: kubelet not initialised
	I0915 04:07:06.283050   24768 retry.go:31] will retry after 3.824918958s: kubelet not initialised
	I0915 04:07:10.160868   24768 retry.go:31] will retry after 7.69743562s: kubelet not initialised
	I0915 04:07:18.080698   24768 kubeadm.go:746] kubelet initialised
	I0915 04:07:18.080698   24768 kubeadm.go:747] duration metric: took 22.3878534s waiting for restarted kubelet to initialise ...
	I0915 04:07:18.080913   24768 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 04:07:18.138038   24768 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace to be "Ready" ...
	I0915 04:07:20.315472   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:22.329999   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:24.838862   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:27.025920   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:29.377017   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:31.394782   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:33.840586   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:36.309583   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:38.369658   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:40.808435   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:43.364642   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:45.410504   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:47.851961   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:50.319061   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:52.394742   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:54.817433   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:56.958516   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:07:59.340751   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:01.396425   24768 pod_ready.go:102] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:02.405603   24768 pod_ready.go:92] pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace has status "Ready":"True"
	I0915 04:08:02.405933   24768 pod_ready.go:81] duration metric: took 44.2678933s waiting for pod "coredns-78fcd69978-lh6bj" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.406004   24768 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.490526   24768 pod_ready.go:92] pod "etcd-no-preload-20210915034542-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:08:02.490526   24768 pod_ready.go:81] duration metric: took 84.5215ms waiting for pod "etcd-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.490526   24768 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.637646   24768 pod_ready.go:92] pod "kube-apiserver-no-preload-20210915034542-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:08:02.637930   24768 pod_ready.go:81] duration metric: took 147.405ms waiting for pod "kube-apiserver-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.637930   24768 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.775473   24768 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210915034542-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:08:02.775660   24768 pod_ready.go:81] duration metric: took 137.7311ms waiting for pod "kube-controller-manager-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.775660   24768 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sm9hm" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.902532   24768 pod_ready.go:92] pod "kube-proxy-sm9hm" in "kube-system" namespace has status "Ready":"True"
	I0915 04:08:02.902532   24768 pod_ready.go:81] duration metric: took 126.8716ms waiting for pod "kube-proxy-sm9hm" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.902532   24768 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.984502   24768 pod_ready.go:92] pod "kube-scheduler-no-preload-20210915034542-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:08:02.984502   24768 pod_ready.go:81] duration metric: took 81.9704ms waiting for pod "kube-scheduler-no-preload-20210915034542-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:02.984733   24768 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace to be "Ready" ...
	I0915 04:08:05.214075   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:07.237113   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:09.642645   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:11.706579   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:14.191827   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:16.658881   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:19.171401   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:21.499408   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:23.701370   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:26.191507   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:28.664726   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:30.739744   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:33.209325   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:35.806934   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:38.288540   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:40.688002   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:42.977585   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:45.157060   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:47.185105   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:49.282874   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:51.648215   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:54.168724   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:56.254739   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:08:58.648263   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:00.658273   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:02.668039   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:04.770753   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:07.334987   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:09.631860   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:11.733697   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:13.757758   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:16.251969   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:18.717728   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:21.242311   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:23.298012   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:26.534861   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:28.838801   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:30.908970   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:33.379560   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:35.655990   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:38.040009   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:40.155562   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:42.175350   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:44.726524   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:46.818975   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:49.167142   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:51.346325   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:53.717885   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:56.247943   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:59.070035   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:01.167010   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:03.173358   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:05.215826   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:07.222806   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:10.433781   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:12.775722   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:15.191772   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:17.282052   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:19.689748   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:21.698902   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:23.752984   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:25.801768   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:28.347110   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:30.941444   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:33.292333   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:35.735469   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:38.173171   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:40.323157   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:42.689617   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:44.749218   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:46.795400   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:49.194669   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:51.263818   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:53.264205   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:55.728784   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:57.753584   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:00.154932   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:02.159882   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:04.553958   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:06.698100   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:09.196630   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:11.239533   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:13.799398   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:16.179578   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:18.384193   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:20.735115   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:23.192974   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:26.362785   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:28.408808   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:30.487576   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:32.761907   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:35.442032   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:37.569800   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:39.752025   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:42.471364   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:44.690384   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:46.691718   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:48.863280   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:51.192752   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:53.352560   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:55.358374   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:57.878674   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:00.145909   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:02.187089   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:03.095326   24768 pod_ready.go:81] duration metric: took 4m0.1114941s waiting for pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace to be "Ready" ...
	E0915 04:12:03.095494   24768 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0915 04:12:03.096444   24768 pod_ready.go:38] duration metric: took 4m45.0165876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 04:12:03.096444   24768 kubeadm.go:604] restartCluster took 6m42.6159819s
	W0915 04:12:03.097266   24768 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0915 04:12:03.097612   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0915 04:13:46.333061   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m43.2358136s)
	I0915 04:13:46.352416   24768 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I0915 04:13:46.476345   24768 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 04:13:46.897436   24768 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 04:13:46.967431   24768 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0915 04:13:46.986429   24768 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 04:13:47.105704   24768 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 04:13:47.107544   24768 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 04:15:31.047401   24768 out.go:204]   - Generating certificates and keys ...
	I0915 04:15:31.059304   24768 out.go:204]   - Booting up control plane ...
	I0915 04:15:31.064316   24768 out.go:204]   - Configuring RBAC rules ...
	I0915 04:15:31.068334   24768 cni.go:93] Creating CNI manager for ""
	I0915 04:15:31.068334   24768 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:15:31.074305   24768 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 04:15:31.077358   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:31.077358   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3 minikube.k8s.io/name=no-preload-20210915034542-22140 minikube.k8s.io/updated_at=2021_09_15T04_15_31_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:32.021740   24768 ops.go:34] apiserver oom_adj: -16
	I0915 04:15:33.248031   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (2.17068s)
	I0915 04:15:33.271350   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:35.911652   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3 minikube.k8s.io/name=no-preload-20210915034542-22140 minikube.k8s.io/updated_at=2021_09_15T04_15_31_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (4.8343099s)
	I0915 04:15:36.232159   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.9605509s)
	I0915 04:15:36.743796   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:38.317375   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.5735841s)
	I0915 04:15:38.750865   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:39.748344   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:40.799669   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.0513283s)
	I0915 04:15:41.255974   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig

                                                
                                                
** /stderr **
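The bracketed [+]/[-] lists in the stderr above are the kube-apiserver's verbose healthz report: minikube polls https://127.0.0.1:59511/healthz roughly every 500ms, and each 500 response names the check that is still failing (here the apiservice-registration-controller poststarthook) until the endpoint finally returns 200 at 04:06:52. As a sketch for debugging by hand, assuming the same host port mapping is still live and that /healthz is still anonymously readable (the default RBAC grant), the same listing can be fetched with:

	curl -k "https://127.0.0.1:59511/healthz?verbose"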

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20210915034542-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0": exit status 1
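The exit status 1 follows from the restart flow in the stderr above: every control-plane pod reached "Ready", but metrics-server-7c784ccb57-j54zl stayed not-Ready for the full 4m0s extra wait, after which minikube abandoned restartCluster, ran kubeadm reset plus a fresh kubeadm init, and the log tail ends while it is still retrying `kubectl get sa default`. The Audit table further down suggests why the pod can never come up: the prior step enabled the addon with --registries=MetricsServer=fake.domain, so its image pull cannot succeed. A hedged way to confirm the stuck pod, assuming the profile's kubeconfig context is present on the host:

	kubectl --context no-preload-20210915034542-22140 -n kube-system describe pod metrics-server-7c784ccb57-j54zl

The Events section of that output would be expected to show an image pull failure against fake.domain.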
helpers_test.go:223: -----------------------post-mortem--------------------------------

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210915034542-22140

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:236: (dbg) docker inspect no-preload-20210915034542-22140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6554d00497b297685ecea525f63cd4dae747508535cc3747f9308cbe78d28096",
	        "Created": "2021-09-15T03:51:08.4887746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235020,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T04:04:27.8892529Z",
	            "FinishedAt": "2021-09-15T04:04:06.6212602Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/6554d00497b297685ecea525f63cd4dae747508535cc3747f9308cbe78d28096/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6554d00497b297685ecea525f63cd4dae747508535cc3747f9308cbe78d28096/hostname",
	        "HostsPath": "/var/lib/docker/containers/6554d00497b297685ecea525f63cd4dae747508535cc3747f9308cbe78d28096/hosts",
	        "LogPath": "/var/lib/docker/containers/6554d00497b297685ecea525f63cd4dae747508535cc3747f9308cbe78d28096/6554d00497b297685ecea525f63cd4dae747508535cc3747f9308cbe78d28096-json.log",
	        "Name": "/no-preload-20210915034542-22140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210915034542-22140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210915034542-22140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7fdd1e60fce5bdd2b61bfe097633f64716a41f8373d974449d919b946c7eb78f-init/diff:/var/lib/docker/overlay2/81b5ed92bfb1e2a2a0e307c706b587bea810390dd4cdeffdaab53cb2bea532a6/diff:/var/lib/docker/overlay2/9560b70ae747eb38506ca99f7bdf1b19d69a399aa855bf6d066d5631b126dae0/diff:/var/lib/docker/overlay2/695fbfd66132a632f9cf21a1dbf1c4585ecf3d79d4ec664dc7322dbe57733e22/diff:/var/lib/docker/overlay2/db1f669858e6abde6d71803adf0e4dab516d446780d5e6b1fa82ed6e2c992d39/diff:/var/lib/docker/overlay2/fab89974c291c465525b131b7fd3c3d267c0435e58b67e536b1f5e99b0fe3552/diff:/var/lib/docker/overlay2/7d5946148c5ebf869abcd61af8cbd81254b96679a59bff1399fa76d06f970a03/diff:/var/lib/docker/overlay2/ac34ffb8ff292d487d8e0007c602732cac31fc43cc9dd73014f4f7f6731002e4/diff:/var/lib/docker/overlay2/c79772dfc8b60a34db55f8f7bdd7eb21bdb2ae1ebae9e19320eb82d243476de1/diff:/var/lib/docker/overlay2/5f0227571cb11adf4a20233b21288f6215d7ee4baa55da18a29c55f255c3f91b/diff:/var/lib/docker/overlay2/8f8a0a
55c9a3d7643b70fafbe1d581deef7a9142bb7504cade2efea33d17c8b6/diff:/var/lib/docker/overlay2/855d9e351347b1bfa0c8fcdd68ca509489970443ce6ac3f078a84319bbdbb0de/diff:/var/lib/docker/overlay2/d6da6485052539019c636fe8ca30537f92704bc855db6bb09a9228e17d5e5ee1/diff:/var/lib/docker/overlay2/3a712bb22c438ea19740b4d19771cd31cbd08e2f23647daf15e09967798d671d/diff:/var/lib/docker/overlay2/e8f4cc7b40bc0b3a9e62ea0d4f5ca169aab3e908980e13c881a98909769e05a7/diff:/var/lib/docker/overlay2/7364b0516116b13f8d51a574ea9312cc8be87bf0923e8ebe0018085133e57195/diff:/var/lib/docker/overlay2/10d8c9ca18bc3463470c25ce09aa92dc1df0366115c9fd5a22e67d1369e27b72/diff:/var/lib/docker/overlay2/e8ad5dbce212f833465ffdc136c8c744beb3bfe489d7f20f82084f854ab617cd/diff:/var/lib/docker/overlay2/391d7b820cdbb31a7bcc9bd350aff08e83bc2f5083fa09d2d7c1db69d1861b08/diff:/var/lib/docker/overlay2/394198ca9ba772f189cefae2c09414df3798734482a0159958ad4c74374079e8/diff:/var/lib/docker/overlay2/c3620c3c820e1cc79a02390c9ede0beacdc7fe42aa0e9564d27d6c793741eafe/diff:/var/lib/d
ocker/overlay2/9b11f1c010dca16f2c216392f2d3c5ec585e7d2ca91eb0a4824410accaba4ef3/diff:/var/lib/docker/overlay2/d8e94cabdfcf34c1c2ecb5355519daea41ba85e90131944f14c6c5faadb3f538/diff:/var/lib/docker/overlay2/335c17cc3e6bcc49659f681fefa84f63f496fab770f62dd31577690f8e3958b6/diff:/var/lib/docker/overlay2/5ef44871aef3ad96e532fdbc78e5379afd65c7ffd39bed734ed35daf134257b5/diff:/var/lib/docker/overlay2/ce73bde16589364238c0bb925bbd93f9b2b9c5e2f3267cc196298f62fbc08342/diff:/var/lib/docker/overlay2/461113b8bc693d226593885e543b82eac9a75ea77d0bcdaa60551cca12495538/diff:/var/lib/docker/overlay2/f7d47793cf5882d3e0b92ebb0d7d2456fc621d6db83cb2439f96c4b248b11d25/diff:/var/lib/docker/overlay2/a8e74e4377f38c1a50d9a335bfc92405a4df112abdcbd2555cbe3b592f071fd5/diff:/var/lib/docker/overlay2/405812e0a303b666cd7c1c0102d8f415494b9641e1f5ab9404e146c2265592cb/diff:/var/lib/docker/overlay2/deecfc978d174b5d2c0a209b450d0fa15828234099690cc9092c6ff67a1926d2/diff:/var/lib/docker/overlay2/6fa41c9e75c99fb82729fdd55e5653ce5b7edf256a1dd8791c3012cf210
7f486/diff:/var/lib/docker/overlay2/2dd2dde99da44abd645912f40fdb7d06e201a622cccf049222fa9a53ab6ca234/diff:/var/lib/docker/overlay2/a73187a91c6737ec4627be55f4b58dab9d4ef30412857cbf1cd6e6778962c9f4/diff:/var/lib/docker/overlay2/7fcd2796c0a1717ddf6c90aad88aff2e11a87b836d8761e756b6bc7a292ed570/diff:/var/lib/docker/overlay2/276597df229fc32d0d371563f135664fa4bef3fbc20372998b7b051504e6188a/diff:/var/lib/docker/overlay2/28f6cf4ea77b5f1df2373079b5b3c9b2ec7e95488cec51c54e7ff22f8fea2f36/diff:/var/lib/docker/overlay2/301627855ef95ac8b04f9b404290e80b6a94b9637ec2ca0c31b5701c6ac786fd/diff:/var/lib/docker/overlay2/a589a72c723642d2bb727fead8edfcaffaca10eed1bb4af32fac19fb6fc32874/diff:/var/lib/docker/overlay2/90d1c9e6fe8a1c74ac53d78f9a0b7ee36fc624becac59c2a6056c004ebe45e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fdd1e60fce5bdd2b61bfe097633f64716a41f8373d974449d919b946c7eb78f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fdd1e60fce5bdd2b61bfe097633f64716a41f8373d974449d919b946c7eb78f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fdd1e60fce5bdd2b61bfe097633f64716a41f8373d974449d919b946c7eb78f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210915034542-22140",
	                "Source": "/var/lib/docker/volumes/no-preload-20210915034542-22140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210915034542-22140",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210915034542-22140",
	                "name.minikube.sigs.k8s.io": "no-preload-20210915034542-22140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "377493f54ed61749a23d6b745c28391cffac9ce566f9d1da545d5fa0a89556ea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59515"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59511"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/377493f54ed6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210915034542-22140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6554d00497b2",
	                        "no-preload-20210915034542-22140"
	                    ],
	                    "NetworkID": "6c4eab0f4ded63939e5b1789133d59b6d851c1748c6697adced9a7a66d3b4d6d",
	                    "EndpointID": "42031938d6ef9cc43f0fc0138fd247aa6bb97cde2a505f66482d63702f270f66",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
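The inspect output confirms the container survived the restart (State "running" since 04:04:27) and that the apiserver's 8443/tcp is published on host port 59511, the exact endpoint the healthz poller above was probing. As a sketch, that mapping can be read back directly with docker's Go-template formatting (quoting shown for a POSIX shell):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-20210915034542-22140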
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915034542-22140 -n no-preload-20210915034542-22140

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915034542-22140 -n no-preload-20210915034542-22140: (7.6499552s)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-20210915034542-22140 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-20210915034542-22140 logs -n 25: (14.34256s)
helpers_test.go:253: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | old-k8s-version-20210915033621-22140            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:02:46 GMT | Wed, 15 Sep 2021 04:02:58 GMT |
	|         | old-k8s-version-20210915033621-22140                       |                                                 |                         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210915034542-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:03:14 GMT | Wed, 15 Sep 2021 04:03:38 GMT |
	|         | no-preload-20210915034542-22140                            |                                                 |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |                         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210915034542-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:03:39 GMT | Wed, 15 Sep 2021 04:04:10 GMT |
	|         | no-preload-20210915034542-22140                            |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |                         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210915034542-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:04:12 GMT | Wed, 15 Sep 2021 04:04:13 GMT |
	|         | no-preload-20210915034542-22140                            |                                                 |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |                         |         |                               |                               |
	| start   | -p newest-cni-20210915040258-22140 --memory=2200           | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:02:59 GMT | Wed, 15 Sep 2021 04:09:00 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |                         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |                         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |                         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |                         |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.22.2-rc.0          |                                                 |                         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:09:01 GMT | Wed, 15 Sep 2021 04:09:19 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |                         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:09:20 GMT | Wed, 15 Sep 2021 04:09:51 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |                         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:09:55 GMT | Wed, 15 Sep 2021 04:09:55 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |                         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210915034625-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:58:29 GMT | Wed, 15 Sep 2021 04:13:40 GMT |
	|         | embed-certs-20210915034625-22140                           |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |                         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                 |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.1                               |                                                 |                         |         |                               |                               |
	| start   | -p newest-cni-20210915040258-22140 --memory=2200           | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:09:56 GMT | Wed, 15 Sep 2021 04:13:43 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |                         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |                         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |                         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |                         |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.22.2-rc.0          |                                                 |                         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:13:52 GMT | Wed, 15 Sep 2021 04:13:58 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |                         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210915034625-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:00 GMT | Wed, 15 Sep 2021 04:14:06 GMT |
	|         | embed-certs-20210915034625-22140                           |                                                 |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |                         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:13:59 GMT | Wed, 15 Sep 2021 04:14:14 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |                         |         |                               |                               |
	| pause   | -p                                                         | embed-certs-20210915034625-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:07 GMT | Wed, 15 Sep 2021 04:14:18 GMT |
	|         | embed-certs-20210915034625-22140                           |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |                         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210915034637-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 03:58:36 GMT | Wed, 15 Sep 2021 04:14:19 GMT |
	|         | default-k8s-different-port-20210915034637-22140            |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |                         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.1                               |                                                 |                         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:26 GMT | Wed, 15 Sep 2021 04:14:33 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |                         |         |                               |                               |
	| unpause | -p                                                         | embed-certs-20210915034625-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:31 GMT | Wed, 15 Sep 2021 04:14:43 GMT |
	|         | embed-certs-20210915034625-22140                           |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |                         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210915034637-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:38 GMT | Wed, 15 Sep 2021 04:14:45 GMT |
	|         | default-k8s-different-port-20210915034637-22140            |                                                 |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |                         |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210915034637-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:46 GMT | Wed, 15 Sep 2021 04:15:08 GMT |
	|         | default-k8s-different-port-20210915034637-22140            |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |                         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:14:47 GMT | Wed, 15 Sep 2021 04:15:24 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210915034637-22140 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:15:20 GMT | Wed, 15 Sep 2021 04:15:28 GMT |
	|         | default-k8s-different-port-20210915034637-22140            |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |                         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210915034625-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:15:03 GMT | Wed, 15 Sep 2021 04:15:31 GMT |
	|         | embed-certs-20210915034625-22140                           |                                                 |                         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210915040258-22140                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:15:25 GMT | Wed, 15 Sep 2021 04:15:35 GMT |
	|         | newest-cni-20210915040258-22140                            |                                                 |                         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210915034625-22140                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:15:32 GMT | Wed, 15 Sep 2021 04:15:41 GMT |
	|         | embed-certs-20210915034625-22140                           |                                                 |                         |         |                               |                               |
	| delete  | -p auto-20210915032655-22140                               | auto-20210915032655-22140                       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 04:15:37 GMT | Wed, 15 Sep 2021 04:15:49 GMT |
	|---------|------------------------------------------------------------|-------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 04:09:56
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
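Every line below carries the glog/klog header named in that format string. For readers who want to post-process these logs, a minimal Go sketch of a parser for that header (the regexp and field names are my own, not part of minikube):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.-]+):(\d+)\] (.*)$`)

    func main() {
        line := "I0915 04:09:56.425459   25180 out.go:298] Setting OutFile to fd 1444 ..."
        if m := klogHeader.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s mmdd=%s time=%s pid=%s file=%s:%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6], m[7])
        }
    }
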
	I0915 04:09:56.425459   25180 out.go:298] Setting OutFile to fd 1444 ...
	I0915 04:09:56.426396   25180 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 04:09:56.426396   25180 out.go:311] Setting ErrFile to fd 1284...
	I0915 04:09:56.426396   25180 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 04:09:56.454736   25180 out.go:305] Setting JSON to false
	I0915 04:09:56.464869   25180 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10281779,"bootTime":1621397217,"procs":158,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 04:09:56.465061   25180 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 04:09:56.469172   25180 out.go:177] * [newest-cni-20210915040258-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 04:09:56.474667   25180 notify.go:169] Checking for updates...
	I0915 04:09:56.478853   25180 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 04:09:56.481375   25180 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 04:09:53.973872   54608 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 04:09:53.973872   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0915 04:09:54.976988   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0915 04:09:54.976988   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0915 04:09:55.432164   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:56.963899   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0915 04:09:56.963899   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0915 04:09:57.671401   54608 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 04:09:57.746530   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0915 04:09:57.746684   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0915 04:09:57.929450   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:56.247943   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:56.484857   25180 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 04:09:56.486369   25180 config.go:177] Loaded profile config "newest-cni-20210915040258-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 04:09:56.487815   25180 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 04:09:58.585915   25180 docker.go:132] docker version: linux-20.10.5
	I0915 04:09:58.586856   25180 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 04:09:59.889626   25180 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.302446s)
	I0915 04:09:59.891017   25180 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 04:09:59.3342175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 04:09:59.896151   25180 out.go:177] * Using the docker driver based on existing profile
	I0915 04:09:59.896483   25180 start.go:278] selected driver: docker
	I0915 04:09:59.896638   25180 start.go:751] validating driver "docker" against &{Name:newest-cni-20210915040258-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:newest-cni-20210915040258-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 04:09:59.896942   25180 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 04:10:00.032832   25180 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 04:10:01.239826   25180 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2069986s)
	I0915 04:10:01.239826   25180 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 04:10:00.7229556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 04:10:01.240848   25180 start_flags.go:756] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0915 04:10:01.240848   25180 cni.go:93] Creating CNI manager for ""
	I0915 04:10:01.240848   25180 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:10:01.240848   25180 start_flags.go:278] config:
	{Name:newest-cni-20210915040258-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:newest-cni-20210915040258-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
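The two config dumps above are Go-syntax prints of the profile's cluster config as it is re-validated for the restart. A trimmed sketch of a struct shape that would print this way, with the field subset read off the dump (the real minikube type has many more fields):

    // Trimmed sketch only; fields and values taken from the dump above.
    type KubernetesConfig struct {
        KubernetesVersion string // v1.22.2-rc.0
        ClusterName       string
        ContainerRuntime  string // docker
        NetworkPlugin     string // cni
        FeatureGates      string // ServerSideApply=true
        ServiceCIDR       string // 10.96.0.0/12
    }

    type ClusterConfig struct {
        Name             string
        KeepContext      bool
        EmbedCerts       bool
        Memory           int // MiB
        CPUs             int
        Driver           string
        KubernetesConfig KubernetesConfig
    }
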
	I0915 04:10:01.244831   25180 out.go:177] * Starting control plane node newest-cni-20210915040258-22140 in cluster newest-cni-20210915040258-22140
	I0915 04:10:01.244831   25180 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 04:09:57.857309   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.7196446s)
	I0915 04:09:58.146235   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:01.246828   25180 out.go:177] * Pulling base image ...
	I0915 04:10:01.247828   25180 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 04:10:01.247828   25180 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 04:10:01.247828   25180 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 04:10:01.247828   25180 cache.go:57] Caching tarball of preloaded images
	I0915 04:10:01.248874   25180 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 04:10:01.248874   25180 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2-rc.0 on docker
	I0915 04:10:01.249813   25180 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\config.json ...
	I0915 04:10:02.041670   25180 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 04:10:02.041670   25180 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 04:10:02.042402   25180 cache.go:206] Successfully downloaded all kic artifacts
	I0915 04:10:02.042565   25180 start.go:313] acquiring machines lock for newest-cni-20210915040258-22140: {Name:mk85b506a66a2d2993867b65631fa6c795391237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 04:10:02.043038   25180 start.go:317] acquired machines lock for "newest-cni-20210915040258-22140" in 289.9µs
	I0915 04:10:02.043461   25180 start.go:93] Skipping create...Using existing machine configuration
	I0915 04:10:02.043461   25180 fix.go:55] fixHost starting: 
	I0915 04:10:02.068270   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:10:02.852913   25180 fix.go:108] recreateIfNeeded on newest-cni-20210915040258-22140: state=Stopped err=<nil>
	W0915 04:10:02.853133   25180 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 04:09:59.924094   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0915 04:09:59.924488   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0915 04:10:00.409527   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:01.421353   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0915 04:10:01.421353   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0915 04:10:02.486273   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:09:59.070035   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:01.167010   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:03.173358   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:02.857174   25180 out.go:177] * Restarting existing docker container for "newest-cni-20210915040258-22140" ...
	I0915 04:10:02.878274   25180 cli_runner.go:115] Run: docker start newest-cni-20210915040258-22140
	I0915 04:10:01.848234   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.7020121s)
	I0915 04:10:02.146034   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:04.874334   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.7283095s)
	I0915 04:10:05.126378   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:04.363132   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0915 04:10:04.363401   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0915 04:10:04.921611   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:07.098076   54608 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 04:10:07.098206   54608 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0915 04:10:07.303420   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:05.215826   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:07.222806   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:06.748565   25180 cli_runner.go:168] Completed: docker start newest-cni-20210915040258-22140: (3.8701261s)
	I0915 04:10:06.762154   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:10:07.561768   25180 kic.go:420] container "newest-cni-20210915040258-22140" state is running.
	I0915 04:10:07.579723   25180 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210915040258-22140
	I0915 04:10:08.436388   25180 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\config.json ...
	I0915 04:10:08.440318   25180 machine.go:88] provisioning docker machine ...
	I0915 04:10:08.440318   25180 ubuntu.go:169] provisioning hostname "newest-cni-20210915040258-22140"
	I0915 04:10:08.449317   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:09.204723   25180 main.go:130] libmachine: Using SSH client type: native
	I0915 04:10:09.205504   25180 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59571 <nil> <nil>}
	I0915 04:10:09.205504   25180 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210915040258-22140 && echo "newest-cni-20210915040258-22140" | sudo tee /etc/hostname
	I0915 04:10:09.222958   25180 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0915 04:10:08.535249   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.4088836s)
	I0915 04:10:08.638634   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:09.493489   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:09.668187   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (22.1754029s)
	I0915 04:10:09.668187   54608 start.go:729] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0915 04:10:09.668187   54608 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (22.1980224s)
	I0915 04:10:11.553675   54608 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 04:10:11.907626   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:10.433781   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:12.775722   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:12.871032   25180 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210915040258-22140
	
	I0915 04:10:12.887563   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:13.693036   25180 main.go:130] libmachine: Using SSH client type: native
	I0915 04:10:13.693784   25180 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59571 <nil> <nil>}
	I0915 04:10:13.693784   25180 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210915040258-22140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210915040258-22140/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210915040258-22140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 04:10:14.239470   25180 main.go:130] libmachine: SSH cmd err, output: <nil>: 
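The SSH exchange just above keeps the 127.0.1.1 entry of the guest's /etc/hosts in sync with the machine hostname: it rewrites the entry if one exists and appends it otherwise, skipping both when the hostname is already present. A sketch of how such a snippet could be templated host-side (the helper name is mine, not minikube's):

    package main

    import "fmt"

    // hostsCmd renders the idempotent /etc/hosts update shown above for a
    // given hostname. Illustrative only; not minikube's actual helper.
    func hostsCmd(host string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, host)
    }

    func main() { fmt.Println(hostsCmd("newest-cni-20210915040258-22140")) }
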
	I0915 04:10:14.239751   25180 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 04:10:14.239978   25180 ubuntu.go:177] setting up certificates
	I0915 04:10:14.239978   25180 provision.go:83] configureAuth start
	I0915 04:10:14.256244   25180 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210915040258-22140
	I0915 04:10:14.993944   25180 provision.go:138] copyHostCerts
	I0915 04:10:14.995004   25180 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 04:10:14.995233   25180 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 04:10:14.996101   25180 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 04:10:15.000067   25180 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 04:10:15.000067   25180 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 04:10:15.000726   25180 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0915 04:10:15.003374   25180 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 04:10:15.003578   25180 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 04:10:15.004264   25180 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 04:10:15.006203   25180 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20210915040258-22140 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210915040258-22140]
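provision.go then issues a server certificate whose subject alternative names are exactly the san=[...] list logged above, signed by the CA under .minikube\certs. A minimal, self-signed sketch of producing a certificate with that SAN list using Go's crypto/x509 (minikube's real code signs with ca.pem/ca-key.pem rather than self-signing, so this is illustrative only):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20210915040258-22140"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-20210915040258-22140"},
            IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed here for brevity; minikube signs with its own CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
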
	I0915 04:10:15.287860   25180 provision.go:172] copyRemoteCerts
	I0915 04:10:15.293672   25180 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 04:10:15.311085   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:13.215905   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.5772883s)
	I0915 04:10:13.648430   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:15.346056   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.6976318s)
	I0915 04:10:15.645525   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:13.927541   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:14.890800   54608 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (27.3131911s)
	I0915 04:10:15.932616   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:18.403179   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:15.191772   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:17.282052   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:16.129443   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:10:16.499962   25180 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.206294s)
	I0915 04:10:16.501127   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 04:10:16.715267   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0915 04:10:16.898578   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 04:10:17.174510   25180 provision.go:86] duration metric: configureAuth took 2.9345436s
	I0915 04:10:17.174702   25180 ubuntu.go:193] setting minikube options for container-runtime
	I0915 04:10:17.175628   25180 config.go:177] Loaded profile config "newest-cni-20210915040258-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 04:10:17.198964   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:17.990529   25180 main.go:130] libmachine: Using SSH client type: native
	I0915 04:10:17.991070   25180 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59571 <nil> <nil>}
	I0915 04:10:17.991227   25180 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 04:10:18.552490   25180 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 04:10:18.552490   25180 ubuntu.go:71] root file system type: overlay
	I0915 04:10:18.553394   25180 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 04:10:18.562967   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:19.417392   25180 main.go:130] libmachine: Using SSH client type: native
	I0915 04:10:19.418411   25180 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59571 <nil> <nil>}
	I0915 04:10:19.418411   25180 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 04:10:20.142189   25180 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 04:10:20.168853   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:20.867951   25180 main.go:130] libmachine: Using SSH client type: native
	I0915 04:10:20.868372   25180 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x158acc0] 0x158db80 <nil>  [] 0s} 127.0.0.1 59571 <nil> <nil>}
	I0915 04:10:20.868552   25180 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
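That command is the idempotent finish of the unit update: the freshly rendered docker.service.new is moved into place, followed by a daemon-reload, enable, and restart, only when it differs from the installed unit. A sketch of rendering the same compare-and-swap command for an arbitrary service (helper is my own, not minikube's):

    package main

    import "fmt"

    // swapUnitCmd renders the compare-and-replace command shown above for a
    // systemd service. Illustrative only.
    func swapUnitCmd(service string) string {
        path := "/lib/systemd/system/" + service + ".service"
        return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
            "{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
            "sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, service)
    }

    func main() { fmt.Println(swapUnitCmd("docker")) }
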
	I0915 04:10:18.312510   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.6669943s)
	I0915 04:10:18.657435   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:20.103611   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.4461816s)
	I0915 04:10:20.149983   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:20.936274   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:23.261904   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:19.689748   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:21.698902   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:23.752984   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:21.498365   25180 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 04:10:21.498734   25180 machine.go:91] provisioned docker machine in 13.0584636s
	I0915 04:10:21.498734   25180 start.go:267] post-start starting for "newest-cni-20210915040258-22140" (driver="docker")
	I0915 04:10:21.498923   25180 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 04:10:21.510931   25180 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 04:10:21.523124   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:22.324209   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:10:22.683319   25180 ssh_runner.go:192] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1723924s)
	I0915 04:10:22.704491   25180 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 04:10:22.769487   25180 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 04:10:22.769487   25180 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 04:10:22.769487   25180 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 04:10:22.769487   25180 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 04:10:22.769487   25180 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 04:10:22.769487   25180 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 04:10:22.769487   25180 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem -> 221402.pem in /etc/ssl/certs
	I0915 04:10:22.791238   25180 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I0915 04:10:22.840891   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem --> /etc/ssl/certs/221402.pem (1708 bytes)
	I0915 04:10:23.044558   25180 start.go:270] post-start completed in 1.545641s
	I0915 04:10:23.062698   25180 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 04:10:23.073555   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:23.870683   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:10:24.186225   25180 ssh_runner.go:192] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1231294s)
	I0915 04:10:24.186225   25180 fix.go:57] fixHost completed within 22.1428467s
	I0915 04:10:24.186225   25180 start.go:80] releasing machines lock for "newest-cni-20210915040258-22140", held for 22.143269s
	I0915 04:10:24.200350   25180 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210915040258-22140
	I0915 04:10:25.074301   25180 ssh_runner.go:152] Run: systemctl --version
	I0915 04:10:25.075288   25180 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 04:10:25.089318   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:25.093242   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:25.926746   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:10:25.929523   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:10:22.174699   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.0247239s)
	I0915 04:10:22.643264   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:25.130348   54608 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (27.4589519s)
	I0915 04:10:25.130348   54608 addons.go:375] Verifying addon metrics-server=true in "embed-certs-20210915034625-22140"
	I0915 04:10:25.457964   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:27.824802   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:25.801768   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:28.347110   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:30.169261   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:30.618628   54608 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (19.0650238s)
	I0915 04:10:26.587068   25180 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5117854s)
	I0915 04:10:26.587380   25180 ssh_runner.go:192] Completed: systemctl --version: (1.5127721s)
	I0915 04:10:26.612230   25180 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 04:10:26.748993   25180 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 04:10:26.875404   25180 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 04:10:26.891235   25180 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 04:10:27.001043   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 04:10:27.133297   25180 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 04:10:27.908519   25180 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 04:10:28.540249   25180 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 04:10:28.718706   25180 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 04:10:29.389178   25180 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 04:10:29.486555   25180 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 04:10:30.042555   25180 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 04:10:30.707695   25180 out.go:204] * Preparing Kubernetes v1.22.2-rc.0 on Docker 20.10.8 ...
	I0915 04:10:30.737179   25180 cli_runner.go:115] Run: docker exec -t newest-cni-20210915040258-22140 dig +short host.docker.internal
	I0915 04:10:26.686056   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0428072s)
	I0915 04:10:27.144216   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:32.281855   25180 cli_runner.go:168] Completed: docker exec -t newest-cni-20210915040258-22140 dig +short host.docker.internal: (1.5446812s)
	I0915 04:10:32.281855   25180 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 04:10:32.318824   25180 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 04:10:32.385422   25180 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
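	The bash pipeline above rewrites /etc/hosts in place: it drops any stale host.minikube.internal line, appends the freshly dug IP, and copies the result back via a temp file. A rough Go equivalent of the filter-and-append step (assumes root; omits the /tmp staging):

	// drop any stale host.minikube.internal entry, then append the fresh one (sketch)
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.65.2\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err) // needs root; the log above stages via /tmp/h.$$ then sudo cp
		}
	}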
	I0915 04:10:32.617300   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:33.408603   25180 out.go:177]   - kubelet.network-plugin=cni
	I0915 04:10:30.624844   54608 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0915 04:10:30.624844   54608 addons.go:406] enableAddons completed in 48.2375203s
	I0915 04:10:32.432660   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:30.941444   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:33.292333   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:33.412878   25180 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0915 04:10:33.414224   25180 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 04:10:33.443786   25180 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 04:10:33.989943   25180 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.2-rc.0
	k8s.gcr.io/kube-proxy:v1.22.2-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.2-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 04:10:33.989943   25180 docker.go:489] Images already preloaded, skipping extraction
	I0915 04:10:33.998951   25180 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 04:10:34.445243   25180 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.2-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.2-rc.0
	k8s.gcr.io/kube-proxy:v1.22.2-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 04:10:34.445243   25180 cache_images.go:78] Images are preloaded, skipping loading
	I0915 04:10:34.461381   25180 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 04:10:35.593321   25180 ssh_runner.go:192] Completed: docker info --format {{.CgroupDriver}}: (1.1319441s)
	I0915 04:10:35.593321   25180 cni.go:93] Creating CNI manager for ""
	I0915 04:10:35.593321   25180 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:10:35.593321   25180 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0915 04:10:35.594054   25180 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.22.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210915040258-22140 NodeName:newest-cni-20210915040258-22140 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 04:10:35.594325   25180 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20210915040258-22140"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 04:10:35.595326   25180 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210915040258-22140 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.2-rc.0 ClusterName:newest-cni-20210915040258-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0915 04:10:35.625104   25180 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.2-rc.0
	I0915 04:10:35.694748   25180 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 04:10:35.709810   25180 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 04:10:35.777365   25180 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0915 04:10:35.945321   25180 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0915 04:10:31.663908   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.5194292s)
	I0915 04:10:31.687247   50332 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:10:35.413253   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:37.597574   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:35.735469   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:38.173171   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:38.144766   50332 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (6.4564444s)
	I0915 04:10:38.145528   50332 kubeadm.go:985] duration metric: took 1m17.0177701s to wait for elevateKubeSystemPrivileges.
	I0915 04:10:38.147244   50332 kubeadm.go:392] StartCluster complete in 11m11.92261s
	I0915 04:10:38.147244   50332 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 04:10:38.149389   50332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 04:10:38.155790   50332 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 04:10:39.276211   50332 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210915034637-22140" rescaled to 1
	I0915 04:10:39.276211   50332 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 04:10:39.276211   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 04:10:39.286695   50332 out.go:177] * Verifying Kubernetes components...
	I0915 04:10:39.276211   50332 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0915 04:10:39.276211   50332 config.go:177] Loaded profile config "default-k8s-different-port-20210915034637-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 04:10:39.287614   50332 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20210915034637-22140"
	I0915 04:10:39.287614   50332 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20210915034637-22140"
	I0915 04:10:39.287837   50332 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20210915034637-22140"
	I0915 04:10:39.287837   50332 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20210915034637-22140"
	I0915 04:10:39.287837   50332 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20210915034637-22140"
	W0915 04:10:39.288041   50332 addons.go:165] addon metrics-server should already be in state true
	I0915 04:10:39.288041   50332 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20210915034637-22140"
	I0915 04:10:39.288041   50332 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20210915034637-22140"
	W0915 04:10:39.288224   50332 addons.go:165] addon dashboard should already be in state true
	W0915 04:10:39.287837   50332 addons.go:165] addon storage-provisioner should already be in state true
	I0915 04:10:39.288224   50332 host.go:66] Checking if "default-k8s-different-port-20210915034637-22140" exists ...
	I0915 04:10:39.288435   50332 host.go:66] Checking if "default-k8s-different-port-20210915034637-22140" exists ...
	I0915 04:10:39.287837   50332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210915034637-22140"
	I0915 04:10:39.288224   50332 host.go:66] Checking if "default-k8s-different-port-20210915034637-22140" exists ...
	I0915 04:10:39.320371   50332 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 04:10:39.333232   50332 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}
	I0915 04:10:39.339819   50332 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}
	I0915 04:10:39.339819   50332 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}
	I0915 04:10:39.393282   50332 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}
	I0915 04:10:40.378912   50332 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}: (1.0390974s)
	I0915 04:10:40.383032   50332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 04:10:40.383750   50332 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 04:10:40.383884   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 04:10:40.417428   50332 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}: (1.077355s)
	I0915 04:10:40.417599   50332 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140
	I0915 04:10:40.421598   50332 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0915 04:10:40.421957   50332 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 04:10:40.421957   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0915 04:10:40.431743   50332 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}: (1.0985149s)
	I0915 04:10:40.437983   50332 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140
	I0915 04:10:40.473165   50332 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}: (1.079582s)
	I0915 04:10:40.478336   50332 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0915 04:10:36.170270   25180 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0915 04:10:36.335820   25180 ssh_runner.go:152] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0915 04:10:36.368202   25180 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 04:10:36.539036   25180 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140 for IP: 192.168.67.2
	I0915 04:10:36.540076   25180 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 04:10:36.540700   25180 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 04:10:36.541017   25180 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\client.key
	I0915 04:10:36.547420   25180 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\apiserver.key.c7fa3a9e
	I0915 04:10:36.547882   25180 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\proxy-client.key
	I0915 04:10:36.549243   25180 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22140.pem (1338 bytes)
	W0915 04:10:36.549755   25180 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22140_empty.pem, impossibly tiny 0 bytes
	I0915 04:10:36.550113   25180 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 04:10:36.550841   25180 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 04:10:36.551255   25180 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 04:10:36.551581   25180 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0915 04:10:36.552249   25180 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem (1708 bytes)
	I0915 04:10:36.564328   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 04:10:36.784928   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 04:10:36.910374   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 04:10:37.082831   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915040258-22140\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 04:10:37.320095   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 04:10:37.515632   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 04:10:37.678966   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 04:10:37.836260   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I0915 04:10:38.104391   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\221402.pem --> /usr/share/ca-certificates/221402.pem (1708 bytes)
	I0915 04:10:38.378135   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 04:10:38.603495   25180 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22140.pem --> /usr/share/ca-certificates/22140.pem (1338 bytes)
	I0915 04:10:38.909020   25180 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 04:10:39.186988   25180 ssh_runner.go:152] Run: openssl version
	I0915 04:10:39.257132   25180 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22140.pem && ln -fs /usr/share/ca-certificates/22140.pem /etc/ssl/certs/22140.pem"
	I0915 04:10:39.401575   25180 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22140.pem
	I0915 04:10:39.495856   25180 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 01:56 /usr/share/ca-certificates/22140.pem
	I0915 04:10:39.511859   25180 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22140.pem
	I0915 04:10:39.585720   25180 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22140.pem /etc/ssl/certs/51391683.0"
	I0915 04:10:39.699939   25180 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/221402.pem && ln -fs /usr/share/ca-certificates/221402.pem /etc/ssl/certs/221402.pem"
	I0915 04:10:39.858631   25180 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/221402.pem
	I0915 04:10:39.921226   25180 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 01:56 /usr/share/ca-certificates/221402.pem
	I0915 04:10:39.951853   25180 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/221402.pem
	I0915 04:10:40.064845   25180 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/221402.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 04:10:40.199049   25180 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 04:10:40.425317   25180 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 04:10:40.564685   25180 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 01:33 /usr/share/ca-certificates/minikubeCA.pem
	I0915 04:10:40.583669   25180 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 04:10:40.657684   25180 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
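	These openssl/ln steps install each CA into the hashed-symlink layout OpenSSL scans: the subject hash (e.g. b5213941) names a .0 symlink in /etc/ssl/certs pointing back at the PEM. A Go sketch of one such installation, shelling out to openssl for the hash (paths as in the log; needs root):

	// install a CA into OpenSSL's hashed-symlink layout: <subject-hash>.0 -> cert (sketch)
	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // ln -fs semantics: replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}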
	I0915 04:10:40.727878   25180 kubeadm.go:390] StartCluster: {Name:newest-cni-20210915040258-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:newest-cni-20210915040258-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 04:10:40.742458   25180 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 04:10:40.481515   50332 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0915 04:10:40.481631   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0915 04:10:40.481631   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0915 04:10:40.507953   50332 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140
	I0915 04:10:40.802930   50332 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20210915034637-22140"
	W0915 04:10:40.802930   50332 addons.go:165] addon default-storageclass should already be in state true
	I0915 04:10:40.803259   50332 host.go:66] Checking if "default-k8s-different-port-20210915034637-22140" exists ...
	I0915 04:10:40.842318   50332 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}
	I0915 04:10:39.957738   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:42.086849   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:40.323157   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:42.689617   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:41.551799   25180 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 04:10:41.658927   25180 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 04:10:41.659405   25180 kubeadm.go:600] restartCluster start
	I0915 04:10:41.666585   25180 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 04:10:41.783358   25180 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:41.808519   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:10:42.714719   25180 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210915040258-22140" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 04:10:42.719506   25180 kubeconfig.go:128] "newest-cni-20210915040258-22140" context is missing from C:\Users\jenkins\minikube-integration\kubeconfig - will repair!
	I0915 04:10:42.736081   25180 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 04:10:42.786113   25180 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 04:10:42.852878   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:42.870239   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:43.020092   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:43.220516   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:43.242623   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:43.382568   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:43.420733   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:43.441839   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:43.543660   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:43.624560   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:43.645625   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:43.819632   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:43.820606   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:43.836328   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:44.004266   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:44.021992   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:44.050851   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:44.179769   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:44.221212   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:44.239009   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:44.352769   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:44.423973   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:44.441623   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:44.573649   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:44.621078   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:44.634765   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:44.744103   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:44.821081   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:44.836425   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:44.977529   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:45.020533   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:45.032985   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:45.164653   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:45.220272   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:45.242404   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:45.358478   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:45.421092   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:45.446148   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:45.580305   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:45.620590   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:45.633492   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:46.006647   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:46.020450   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:41.430112   50332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59345 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\default-k8s-different-port-20210915034637-22140\id_rsa Username:docker}
	I0915 04:10:41.446935   50332 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140: (1.0085828s)
	I0915 04:10:41.447103   50332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59345 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\default-k8s-different-port-20210915034637-22140\id_rsa Username:docker}
	I0915 04:10:41.469203   50332 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140: (1.0510586s)
	I0915 04:10:41.469428   50332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59345 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\default-k8s-different-port-20210915034637-22140\id_rsa Username:docker}
	I0915 04:10:41.877499   50332 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}: (1.0351845s)
	I0915 04:10:41.877499   50332 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 04:10:41.877499   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 04:10:41.882563   50332 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140
	I0915 04:10:42.793130   50332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59345 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\default-k8s-different-port-20210915034637-22140\id_rsa Username:docker}
	I0915 04:10:44.372861   50332 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 04:10:44.372968   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0915 04:10:44.558976   50332 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 04:10:45.117906   50332 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 04:10:45.117906   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0915 04:10:45.429038   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0915 04:10:45.429253   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0915 04:10:44.465767   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:46.926899   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:44.749218   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:46.795400   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:46.038582   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:46.122503   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:46.122503   25180 api_server.go:164] Checking apiserver status ...
	I0915 04:10:46.141857   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 04:10:46.259677   25180 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:46.259896   25180 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
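	The long run of pgrep probes above is a poll-until-deadline loop: once the budget is exhausted, restartCluster concludes the apiserver is gone and falls back to a full reconfigure, which is what the next lines do. A compressed Go sketch of that loop (the 15s budget here is assumed for illustration):

	// poll for the apiserver process with a deadline (sketch of the retry loop above)
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(15 * time.Second) // assumed budget
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				log.Println("apiserver is up")
				return
			}
			time.Sleep(200 * time.Millisecond)
		}
		log.Println("timed out waiting for the condition; falling back to reconfigure")
	}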
	I0915 04:10:46.259896   25180 kubeadm.go:1032] stopping kube-system containers ...
	I0915 04:10:46.276971   25180 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 04:10:46.666701   25180 docker.go:390] Stopping containers: [6398dc9604df 1b98c0afd3ac 9462b282d5a0 ff3370ddb570 0fb07f6c15bf 1cef834158eb 979f5078dcfa c7c36e814c5e 78722d3e6571 587a04955cca d8fcce56171d 9153b7b09912 fccf2427902b 942a284a7664 3f02862f158d 6f1997c6a519]
	I0915 04:10:46.687181   25180 ssh_runner.go:152] Run: docker stop 6398dc9604df 1b98c0afd3ac 9462b282d5a0 ff3370ddb570 0fb07f6c15bf 1cef834158eb 979f5078dcfa c7c36e814c5e 78722d3e6571 587a04955cca d8fcce56171d 9153b7b09912 fccf2427902b 942a284a7664 3f02862f158d 6f1997c6a519
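	The filter name=k8s_.*_(kube-system)_ matches kubelet's container naming scheme (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), so the two commands above list and then batch-stop every kube-system container. A Go sketch of the same list-then-stop pair:

	// list and stop kube-system containers by kubelet's k8s_* naming convention (sketch)
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
			log.Fatal(err)
		}
	}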
	I0915 04:10:47.190705   25180 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I0915 04:10:47.394049   25180 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 04:10:47.518586   25180 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Sep 15 04:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 15 04:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 15 04:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 15 04:07 /etc/kubernetes/scheduler.conf
	
	I0915 04:10:47.532896   25180 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 04:10:47.644002   25180 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 04:10:47.834032   25180 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 04:10:47.916075   25180 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:47.922851   25180 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 04:10:48.038274   25180 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 04:10:48.119065   25180 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 04:10:48.157717   25180 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 04:10:48.255687   25180 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 04:10:48.355143   25180 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0915 04:10:48.355302   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:10:49.150153   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:10:46.449679   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (7.1730963s)
	I0915 04:10:46.449679   50332 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (7.1293338s)
	I0915 04:10:46.450446   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 04:10:46.470716   50332 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20210915034637-22140
	I0915 04:10:47.325474   50332 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210915034637-22140" to be "Ready" ...
	I0915 04:10:47.464016   50332 node_ready.go:49] node "default-k8s-different-port-20210915034637-22140" has status "Ready":"True"
	I0915 04:10:47.464255   50332 node_ready.go:38] duration metric: took 138.7816ms waiting for node "default-k8s-different-port-20210915034637-22140" to be "Ready" ...
	I0915 04:10:47.464255   50332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 04:10:47.593034   50332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-2ds62" in "kube-system" namespace to be "Ready" ...
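	Each pod_ready.go wait seen throughout this log polls a pod's Ready condition until it flips to True or the budget expires. A stand-in Go sketch using kubectl's JSONPath output rather than minikube's internal client (pod name and namespace copied from the line above):

	// poll a pod's Ready condition until True or timeout (sketch of pod_ready.go's wait)
	package main

	import (
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
				"coredns-78fcd69978-2ds62", "-o", jsonpath).Output()
			if strings.TrimSpace(string(out)) == "True" {
				log.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Println("timed out waiting for pod to be Ready")
	}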
	I0915 04:10:47.767050   50332 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 04:10:47.767050   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0915 04:10:48.465540   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0915 04:10:48.465730   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0915 04:10:49.893312   50332 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 04:10:50.038424   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:50.767323   50332 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 04:10:49.001717   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:51.034922   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:53.470408   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:49.194669   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:51.263818   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:53.264205   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:52.385115   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:52.413923   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0915 04:10:52.413923   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0915 04:10:54.843619   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:55.155183   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0915 04:10:55.155380   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0915 04:10:55.965914   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:58.433465   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:55.728784   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:57.753584   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:56.467519   25180 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (7.3170796s)
	I0915 04:10:56.467731   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:10:58.058529   25180 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (1.587969s)
	I0915 04:10:58.058760   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:10:59.068087   25180 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (1.0093306s)
	I0915 04:10:59.068275   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
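
Process 25180 is rebuilding its control plane phase by phase: instead of one monolithic "kubeadm init", minikube invokes individual "kubeadm init phase" subcommands (kubeconfig, kubelet-start, control-plane, etcd) in sequence, each against the same kubeadm.yaml. A sketch of that driver loop, run locally for brevity (the log shows the real commands going through ssh_runner on the node):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Phases in the order the log shows them being executed.
        phases := []string{"kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf("sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH "+
                "kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", p)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                log.Fatalf("phase %q failed: %v\n%s", p, err, out)
            }
        }
    }
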
	I0915 04:10:59.739647   25180 api_server.go:50] waiting for apiserver process to appear ...
	I0915 04:10:59.761130   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:00.440235   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:00.939790   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:10:57.282116   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:10:59.325237   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:00.457027   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:02.926355   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:00.154932   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:02.159882   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:01.463715   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:01.932112   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:02.435368   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:02.926355   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:03.455875   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:03.951996   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:04.435578   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:04.938656   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:05.433533   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:05.939433   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:01.978673   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:03.226452   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0915 04:11:03.226619   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0915 04:11:04.110219   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:06.331723   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:04.926581   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:07.454221   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:04.553958   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:06.698100   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:06.435029   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:06.928554   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:07.431956   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:07.938270   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:08.436440   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:08.936111   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:09.438546   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:09.955268   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:10.423630   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:10.941998   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:08.374379   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:10.584397   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:09.969481   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:12.536220   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:09.196630   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:11.239533   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:13.799398   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:11.434000   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:11.937861   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:12.436182   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:12.933673   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:13.433920   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:14.456253   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:14.940891   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:15.934360   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:12.849629   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:14.152755   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0915 04:11:14.153071   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0915 04:11:15.082214   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:14.909701   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:17.015616   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:16.179578   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:18.384193   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:16.935826   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:17.965072   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:18.942120   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:19.926586   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:20.442433   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:20.933925   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:17.425914   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:19.848500   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:19.432895   54608 pod_ready.go:102] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:21.624536   54608 pod_ready.go:92] pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace has status "Ready":"True"
	I0915 04:11:21.624536   54608 pod_ready.go:81] duration metric: took 1m30.8553594s waiting for pod "coredns-78fcd69978-bn7fz" in "kube-system" namespace to be "Ready" ...
	I0915 04:11:21.624745   54608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z7cv6" in "kube-system" namespace to be "Ready" ...
	I0915 04:11:22.160772   54608 pod_ready.go:92] pod "kube-proxy-z7cv6" in "kube-system" namespace has status "Ready":"True"
	I0915 04:11:22.160903   54608 pod_ready.go:81] duration metric: took 536.1599ms waiting for pod "kube-proxy-z7cv6" in "kube-system" namespace to be "Ready" ...
	I0915 04:11:22.160903   54608 pod_ready.go:38] duration metric: took 1m31.4859787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
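
The pod_ready.go lines trace a poll loop: each system-critical pod (and each pod matching the listed labels) is re-read from the API server until its Ready condition turns True, with a per-pod budget of 6m0s and the overall "extra waiting" duration reported at the end. A minimal client-go sketch of one such wait, assuming a configured *kubernetes.Clientset and a roughly 500ms poll interval (this is not minikube's actual implementation):

    package podready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports condition Ready=True.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
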
	I0915 04:11:22.160903   54608 api_server.go:50] waiting for apiserver process to appear ...
	I0915 04:11:22.184393   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 04:11:20.735115   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:23.192974   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:21.436011   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:21.948901   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:22.435704   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:23.453020   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:24.436344   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:25.489285   25180 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.0529447s)
	I0915 04:11:25.940935   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:21.512933   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0915 04:11:21.513168   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0915 04:11:22.414683   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:25.026065   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:26.845597   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (4.6610903s)
	I0915 04:11:26.845732   54608 logs.go:270] 1 containers: [9eef4f15a96f]
	I0915 04:11:26.848799   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 04:11:26.362785   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:28.408808   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:26.440958   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:26.939507   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:28.210579   25180 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.2709518s)
	I0915 04:11:28.448382   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:29.581927   25180 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.1335491s)
	I0915 04:11:29.948318   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:27.415557   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:29.408239   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0915 04:11:29.408239   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0915 04:11:29.417754   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:31.102835   50332 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (46.543707s)
	I0915 04:11:30.246409   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (3.3976222s)
	I0915 04:11:30.246757   54608 logs.go:270] 1 containers: [b039660af93e]
	I0915 04:11:30.275362   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 04:11:31.949501   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (1.674145s)
	I0915 04:11:31.949501   54608 logs.go:270] 1 containers: [265438529b0d]
	I0915 04:11:31.972765   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 04:11:30.487576   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:32.761907   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:31.214946   25180 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.2666328s)
	I0915 04:11:31.436735   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:11:33.298368   25180 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.8616399s)
	I0915 04:11:33.298368   25180 api_server.go:70] duration metric: took 33.5588419s to wait for apiserver process to appear ...
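
Before probing HTTP health, the runner first waits for the kube-apiserver process itself to exist, which is what the long run of "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above is: a roughly 500ms poll that in this case took 33.6s to succeed. A local sketch of that loop with an illustrative budget (the real calls go over SSH to the node):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative budget
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                log.Println("apiserver process appeared")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for apiserver process")
    }
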
	I0915 04:11:33.298368   25180 api_server.go:86] waiting for apiserver healthz status ...
	I0915 04:11:33.298368   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:11:33.323882   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": EOF
	I0915 04:11:33.825308   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:11:31.669340   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:33.527132   50332 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 04:11:33.527345   50332 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0915 04:11:34.537050   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:35.417680   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (3.4449276s)
	I0915 04:11:35.417854   54608 logs.go:270] 1 containers: [7b4900acc0f2]
	I0915 04:11:35.443960   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 04:11:38.292338   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (2.8483884s)
	I0915 04:11:38.292338   54608 logs.go:270] 1 containers: [b39a4e1f4e1a]
	I0915 04:11:38.314765   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 04:11:35.442032   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:37.569800   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:38.839396   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:11:39.325134   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:11:36.884233   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:39.390461   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:39.585427   50332 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (49.6922957s)
	I0915 04:11:39.585427   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (53.1351743s)
	I0915 04:11:39.585427   50332 start.go:729] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
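
The pipeline that completed at 04:11:39 dumps the coredns ConfigMap, uses sed to splice a hosts block in front of CoreDNS's "forward . /etc/resolv.conf" directive, and writes the result back with "kubectl replace". Reconstructed from that sed expression, the fragment injected into the Corefile is:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }

With this in place, in-cluster lookups of host.minikube.internal resolve to the host gateway address (192.168.65.2 under Docker Desktop), and "fallthrough" hands every other name on to the next plugin in the chain.
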
	I0915 04:11:40.241506   50332 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 04:11:41.577736   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (3.2629827s)
	I0915 04:11:41.577923   54608 logs.go:270] 1 containers: [c96424713549]
	I0915 04:11:41.594172   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 04:11:43.551149   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.956984s)
	I0915 04:11:43.551327   54608 logs.go:270] 1 containers: [a966dd345dfd]
	I0915 04:11:43.565878   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 04:11:39.752025   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:42.471364   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:44.327016   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:11:44.824025   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:11:42.311672   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:45.951582   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:45.408349   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.8423238s)
	I0915 04:11:45.408349   54608 logs.go:270] 2 containers: [62008d343546 a4fcce6eda45]
	I0915 04:11:45.408349   54608 logs.go:123] Gathering logs for etcd [b039660af93e] ...
	I0915 04:11:45.408349   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b039660af93e"
	I0915 04:11:46.544082   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b039660af93e": (1.1357375s)
	I0915 04:11:46.627841   54608 logs.go:123] Gathering logs for kubernetes-dashboard [c96424713549] ...
	I0915 04:11:46.627841   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c96424713549"
	I0915 04:11:48.713744   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 c96424713549": (2.0859105s)
	I0915 04:11:48.714734   54608 logs.go:123] Gathering logs for Docker ...
	I0915 04:11:48.714734   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 04:11:44.690384   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:46.691718   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:48.863280   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:49.836003   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:11:50.325032   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:11:48.596281   50332 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (57.8287702s)
	I0915 04:11:48.596440   50332 addons.go:375] Verifying addon metrics-server=true in "default-k8s-different-port-20210915034637-22140"
	I0915 04:11:48.671457   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:50.970484   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:49.461586   54608 logs.go:123] Gathering logs for kube-apiserver [9eef4f15a96f] ...
	I0915 04:11:49.461586   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 9eef4f15a96f"
	I0915 04:11:53.724766   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 9eef4f15a96f": (4.2630249s)
	I0915 04:11:53.747819   54608 logs.go:123] Gathering logs for dmesg ...
	I0915 04:11:53.747819   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
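
Process 54608 is in its diagnostics sweep: it resolves each control-plane component's container ID with a name-filtered "docker ps", tails the last 400 lines from each container, and rounds that out with the kubelet and docker journals, dmesg, and "kubectl describe nodes". A condensed local sketch of the discover-then-tail step (the log shows it running through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
            // Resolve the container ID for this component, if any.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                continue // no container found; minikube logs a warning instead
            }
            logs, _ := exec.Command("docker", "logs", "--tail", "400", ids[0]).CombinedOutput()
            fmt.Printf("==> %s [%s]\n%s\n", name, ids[0], logs)
        }
    }
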
	I0915 04:11:51.192752   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:53.352560   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:55.326434   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:11:55.823980   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:11:53.362506   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:55.412816   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:54.299625   54608 logs.go:123] Gathering logs for describe nodes ...
	I0915 04:11:54.299625   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 04:11:55.358374   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:11:57.878674   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:00.826142   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:11:58.418187   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:00.613661   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:01.205213   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (6.9056122s)
	I0915 04:12:01.209959   54608 logs.go:123] Gathering logs for kube-scheduler [7b4900acc0f2] ...
	I0915 04:12:01.210118   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7b4900acc0f2"
	I0915 04:12:00.145909   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:02.187089   24768 pod_ready.go:102] pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:03.095326   24768 pod_ready.go:81] duration metric: took 4m0.1114941s waiting for pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace to be "Ready" ...
	E0915 04:12:03.095494   24768 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-j54zl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0915 04:12:03.096444   24768 pod_ready.go:38] duration metric: took 4m45.0165876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 04:12:03.096444   24768 kubeadm.go:604] restartCluster took 6m42.6159819s
	W0915 04:12:03.097266   24768 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0915 04:12:03.097612   24768 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
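
For process 24768 the 4m0s readiness budget has run out, so restartCluster abandons the in-place restart and falls back to wiping the node with "kubeadm reset --force" before re-initialising. A sketch of the control flow those lines imply (waitForSystemPods here is an illustrative stub, not minikube's code; the reset command is the one from the log):

    package main

    import (
        "errors"
        "log"
        "os/exec"
        "time"
    )

    // waitForSystemPods is a stand-in for the real readiness wait; here it
    // simply reports the timeout that the log shows occurring.
    func waitForSystemPods(timeout time.Duration) error {
        return errors.New("timed out waiting " + timeout.String() + " for system pods")
    }

    func main() {
        if err := waitForSystemPods(4 * time.Minute); err != nil {
            log.Printf("! Unable to restart cluster, will reset it: %v", err)
            reset := "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH " +
                "kubeadm reset --cri-socket /var/run/dockershim.sock --force"
            if out, err := exec.Command("/bin/bash", "-c", reset).CombinedOutput(); err != nil {
                log.Fatalf("reset failed: %v\n%s", err, out)
            }
            // ...followed by a fresh kubeadm init of the cluster.
        }
    }
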
	I0915 04:12:01.324355   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:03.423392   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:05.570643   50332 pod_ready.go:102] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"False"
	I0915 04:12:07.629295   50332 pod_ready.go:92] pod "coredns-78fcd69978-2ds62" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:07.629440   50332 pod_ready.go:81] duration metric: took 1m20.0366959s waiting for pod "coredns-78fcd69978-2ds62" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:07.629440   50332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-66wgz" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:07.831272   50332 pod_ready.go:92] pod "coredns-78fcd69978-66wgz" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:07.831272   50332 pod_ready.go:81] duration metric: took 201.8329ms waiting for pod "coredns-78fcd69978-66wgz" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:07.831272   50332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:08.085682   50332 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:08.085682   50332 pod_ready.go:81] duration metric: took 254.4108ms waiting for pod "etcd-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:08.085682   50332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:08.268814   50332 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (28.0270002s)
	I0915 04:12:04.005382   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 7b4900acc0f2": (2.7952742s)
	I0915 04:12:04.023007   54608 logs.go:123] Gathering logs for kube-proxy [b39a4e1f4e1a] ...
	I0915 04:12:04.023233   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b39a4e1f4e1a"
	I0915 04:12:05.893292   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b39a4e1f4e1a": (1.8700657s)
	I0915 04:12:05.895347   54608 logs.go:123] Gathering logs for storage-provisioner [a966dd345dfd] ...
	I0915 04:12:05.895608   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 a966dd345dfd"
	I0915 04:12:08.285716   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 a966dd345dfd": (2.390116s)
	I0915 04:12:08.287048   54608 logs.go:123] Gathering logs for kubelet ...
	I0915 04:12:08.287219   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 04:12:06.326070   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:12:06.834056   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:08.272149   50332 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0915 04:12:08.272325   50332 addons.go:406] enableAddons completed in 1m28.996437s
	I0915 04:12:09.015925   50332 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:09.016059   50332 pod_ready.go:81] duration metric: took 930.2461ms waiting for pod "kube-apiserver-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.016059   50332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.120070   50332 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:09.120070   50332 pod_ready.go:81] duration metric: took 104.0115ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.120070   50332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2dt6m" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.293308   50332 pod_ready.go:92] pod "kube-proxy-2dt6m" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:09.293522   50332 pod_ready.go:81] duration metric: took 173.2389ms waiting for pod "kube-proxy-2dt6m" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.293522   50332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.417503   50332 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace has status "Ready":"True"
	I0915 04:12:09.417503   50332 pod_ready.go:81] duration metric: took 123.9814ms waiting for pod "kube-scheduler-default-k8s-different-port-20210915034637-22140" in "kube-system" namespace to be "Ready" ...
	I0915 04:12:09.417503   50332 pod_ready.go:38] duration metric: took 1m21.9535448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 04:12:09.417503   50332 api_server.go:50] waiting for apiserver process to appear ...
	I0915 04:12:09.434372   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 04:12:09.251674   54608 logs.go:123] Gathering logs for kube-controller-manager [a4fcce6eda45] ...
	I0915 04:12:09.251674   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 a4fcce6eda45"
	I0915 04:12:10.297366   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 a4fcce6eda45": (1.045696s)
	I0915 04:12:10.316708   54608 logs.go:123] Gathering logs for container status ...
	I0915 04:12:10.316708   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 04:12:11.953435   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.6367327s)
	I0915 04:12:11.954414   54608 logs.go:123] Gathering logs for kube-controller-manager [62008d343546] ...
	I0915 04:12:11.954781   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 62008d343546"
	I0915 04:12:13.557678   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 62008d343546": (1.6029034s)
	I0915 04:12:13.578532   54608 logs.go:123] Gathering logs for coredns [265438529b0d] ...
	I0915 04:12:13.578834   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 265438529b0d"
	I0915 04:12:11.834861   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:12:12.324437   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:12.095071   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (2.6605493s)
	I0915 04:12:12.095200   50332 logs.go:270] 1 containers: [8de9faa1fcfa]
	I0915 04:12:12.108063   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 04:12:13.885827   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (1.7775625s)
	I0915 04:12:13.885827   50332 logs.go:270] 1 containers: [5795606a4eee]
	I0915 04:12:13.895856   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 04:12:14.892950   50332 logs.go:270] 2 containers: [7c72ae242581 f85c0e7f452e]
	I0915 04:12:14.904144   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 04:12:16.156998   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.2528582s)
	I0915 04:12:16.156998   50332 logs.go:270] 1 containers: [739aa7fb4f15]
	I0915 04:12:16.169341   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 04:12:16.727020   54608 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:12:17.399496   54608 api_server.go:70] duration metric: took 2m35.01256s to wait for apiserver process to appear ...
	I0915 04:12:17.399650   54608 api_server.go:86] waiting for apiserver healthz status ...
	I0915 04:12:17.417242   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 04:12:17.326274   25180 api_server.go:255] stopped: https://127.0.0.1:59575/healthz: Get "https://127.0.0.1:59575/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 04:12:17.825071   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:18.012497   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.8431622s)
	I0915 04:12:18.012497   50332 logs.go:270] 1 containers: [c690df0ad35a]
	I0915 04:12:18.034637   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 04:12:19.254582   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.2199494s)
	I0915 04:12:19.254582   50332 logs.go:270] 0 containers: []
	W0915 04:12:19.254582   50332 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0915 04:12:19.265076   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 04:12:20.710381   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.4453104s)
	I0915 04:12:20.710569   50332 logs.go:270] 1 containers: [86f3b30e297b]
	I0915 04:12:20.726724   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 04:12:18.953952   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.5364843s)
	I0915 04:12:18.953952   54608 logs.go:270] 1 containers: [9eef4f15a96f]
	I0915 04:12:18.973904   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 04:12:19.533265   54608 logs.go:270] 1 containers: [b039660af93e]
	I0915 04:12:19.550971   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 04:12:20.215986   54608 logs.go:270] 1 containers: [265438529b0d]
	I0915 04:12:20.228049   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 04:12:21.447185   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.2179607s)
	I0915 04:12:21.447486   54608 logs.go:270] 1 containers: [7b4900acc0f2]
	I0915 04:12:21.453037   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 04:12:22.365809   54608 logs.go:270] 1 containers: [b39a4e1f4e1a]
	I0915 04:12:22.378874   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 04:12:21.777545   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 04:12:21.777545   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 04:12:21.824951   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:22.616486   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:22.616486   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:22.824919   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:22.952831   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:22.953319   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:23.325843   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:23.593139   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:23.593393   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:23.825034   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:24.086250   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:24.086873   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:24.323790   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:24.410221   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:24.412000   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:24.824181   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:24.934733   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:24.934733   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:25.324959   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:25.506548   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:25.506548   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	I0915 04:12:25.824583   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:25.958619   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:25.958619   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
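	The repeated blocks above come from process 25180 polling the kube-apiserver readiness endpoint at https://127.0.0.1:59575/healthz roughly twice a second; each 500 response itemizes the server's health checks, [+] for passing and [-] for failing. Here rbac/bootstrap-roles and apiservice-registration-controller are the holdouts. Below is a minimal Go sketch of that loop, assuming a self-signed apiserver certificate; it is illustrative only, not minikube's actual api_server.go.

```go
// Minimal sketch of the readiness loop reflected in the log above:
// poll /healthz until it returns 200, treating 500 as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed certificate on 127.0.0.1 here,
	// so this sketch skips verification; real code should pin the CA.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "healthz returned 200: ok" case
			}
			// 500 bodies carry the per-check [+]/[-] lines seen above.
			fmt.Printf("status %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // ~2 probes/second, as in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:59575/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

	Once every check passes, /healthz returns 200 with body `ok`, which is the exit condition visible at 04:12:32 further down.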
	I0915 04:12:23.365388   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (2.6386736s)
	I0915 04:12:23.365388   50332 logs.go:270] 2 containers: [5fc3193ec4b2 562b32868c8b]
	I0915 04:12:23.365388   50332 logs.go:123] Gathering logs for dmesg ...
	I0915 04:12:23.365388   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 04:12:23.814124   50332 logs.go:123] Gathering logs for kube-apiserver [8de9faa1fcfa] ...
	I0915 04:12:23.814124   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 8de9faa1fcfa"
	I0915 04:12:24.561075   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (2.1822093s)
	I0915 04:12:24.561598   54608 logs.go:270] 1 containers: [c96424713549]
	I0915 04:12:24.564315   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 04:12:25.470574   54608 logs.go:270] 1 containers: [a966dd345dfd]
	I0915 04:12:25.499637   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 04:12:26.269773   54608 logs.go:270] 2 containers: [62008d343546 a4fcce6eda45]
	I0915 04:12:26.270189   54608 logs.go:123] Gathering logs for kubelet ...
	I0915 04:12:26.270189   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 04:12:27.280663   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.0103486s)
	I0915 04:12:27.377892   54608 logs.go:123] Gathering logs for kube-apiserver [9eef4f15a96f] ...
	I0915 04:12:27.377892   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 9eef4f15a96f"
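	Meanwhile, processes 50332 and 54608 are in minikube's diagnostics pass: for each control-plane component they first list matching container ids via a name filter, then tail each container's last 400 log lines. A hedged sketch of that two-step pattern follows; the containerIDs helper is mine, not minikube's logs.go API.

```go
// Sketch of the two-step gathering pattern above: find container ids
// by the kubelet's k8s_<component> naming convention, then tail logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-controller-manager")
	if err != nil {
		panic(err)
	}
	// The log above found two: a current and an exited instance.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// CombinedOutput tolerates a non-zero exit; gathering is
		// best-effort, as the warning later in this log shows.
		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("== %s ==\n%s", id, out)
	}
}
```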
	I0915 04:12:26.325023   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:26.434357   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:26.434792   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:26.826023   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:26.995629   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:26.995919   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:27.323909   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:27.476922   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:27.477903   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:27.825791   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:27.960896   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:27.960896   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:28.324682   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:28.406240   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:28.406597   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:28.824358   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:28.927152   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:28.927152   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:29.326544   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:29.440576   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:29.440576   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:29.825294   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:30.078993   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:30.078993   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:30.324153   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:30.390376   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:30.390376   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:30.824009   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:30.909895   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:30.909895   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:27.880728   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 8de9faa1fcfa": (4.0666186s)
	I0915 04:12:27.907611   50332 logs.go:123] Gathering logs for coredns [7c72ae242581] ...
	I0915 04:12:27.907973   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7c72ae242581"
	I0915 04:12:29.197612   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 7c72ae242581": (1.2896437s)
	I0915 04:12:29.197612   50332 logs.go:123] Gathering logs for kube-controller-manager [5fc3193ec4b2] ...
	I0915 04:12:29.197612   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5fc3193ec4b2"
	I0915 04:12:30.001242   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 9eef4f15a96f": (2.6233596s)
	I0915 04:12:30.036989   54608 logs.go:123] Gathering logs for storage-provisioner [a966dd345dfd] ...
	I0915 04:12:30.036989   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 a966dd345dfd"
	I0915 04:12:31.763538   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 a966dd345dfd": (1.726555s)
	I0915 04:12:31.764854   54608 logs.go:123] Gathering logs for Docker ...
	I0915 04:12:31.764989   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 04:12:32.415744   54608 logs.go:123] Gathering logs for dmesg ...
	I0915 04:12:32.415916   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 04:12:33.355440   54608 logs.go:123] Gathering logs for describe nodes ...
	I0915 04:12:33.355440   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 04:12:31.324113   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:31.418915   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 04:12:31.418915   25180 api_server.go:101] status: https://127.0.0.1:59575/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 04:12:31.823706   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:12:32.414181   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 200:
	ok
	I0915 04:12:32.508057   25180 api_server.go:139] control plane version: v1.22.2-rc.0
	I0915 04:12:32.508057   25180 api_server.go:129] duration metric: took 59.209902s to wait for apiserver health ...
	I0915 04:12:32.508247   25180 cni.go:93] Creating CNI manager for ""
	I0915 04:12:32.508381   25180 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:12:32.508381   25180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 04:12:32.626124   25180 system_pods.go:59] 8 kube-system pods found
	I0915 04:12:32.626124   25180 system_pods.go:61] "coredns-78fcd69978-d9rxt" [797e8547-2434-4314-85a0-05684a4dff7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 04:12:32.626124   25180 system_pods.go:61] "etcd-newest-cni-20210915040258-22140" [46a24220-ff27-41ee-8fe6-7b8bd12a34d0] Running
	I0915 04:12:32.626124   25180 system_pods.go:61] "kube-apiserver-newest-cni-20210915040258-22140" [b202b335-37b0-4812-93cd-83ec697e4714] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 04:12:32.626124   25180 system_pods.go:61] "kube-controller-manager-newest-cni-20210915040258-22140" [dd215459-34ba-4fea-baa3-600daa15f30d] Running
	I0915 04:12:32.626124   25180 system_pods.go:61] "kube-proxy-kqwtr" [6f658bfb-d105-4855-bfec-a8a2ca888629] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0915 04:12:32.626124   25180 system_pods.go:61] "kube-scheduler-newest-cni-20210915040258-22140" [070752a7-0be4-43f1-955e-24562cff6675] Running
	I0915 04:12:32.626124   25180 system_pods.go:61] "metrics-server-7c784ccb57-jr9df" [ba2e9edd-4120-4ee1-a671-ffa1f9a6d8b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:12:32.626836   25180 system_pods.go:61] "storage-provisioner" [f23ed07a-c5da-4df9-afd7-222a1872a30d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 04:12:32.626836   25180 system_pods.go:74] duration metric: took 118.4556ms to wait for pod list to return data ...
	I0915 04:12:32.626836   25180 node_conditions.go:102] verifying NodePressure condition ...
	I0915 04:12:32.647175   25180 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 04:12:32.647364   25180 node_conditions.go:123] node cpu capacity is 4
	I0915 04:12:32.647521   25180 node_conditions.go:105] duration metric: took 20.6849ms to run NodePressure ...
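	With healthz green, 25180 moves to the next gate: list kube-system pods and verify node conditions before declaring the control plane usable. Note in the pod list above that a pod can report Running while still ContainersNotReady, because readiness is tracked per container. An illustrative client-go sketch of the pod listing follows, assuming the in-VM kubeconfig path from the log; minikube's own system_pods.go differs in detail.

```go
// Illustrative client-go sketch of "waiting for kube-system pods to
// appear", as in the system_pods.go lines above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the log; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err) // panics are fine for a sketch
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, pod := range pods.Items {
		// Phase alone is coarse: the log also prints per-container
		// Ready conditions, which is why "Running" pods can still
		// show ContainersNotReady.
		fmt.Printf("%q %s\n", pod.Name, pod.Status.Phase)
	}
}
```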
	I0915 04:12:32.647521   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 04:12:34.216050   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5fc3193ec4b2": (5.0184565s)
	I0915 04:12:34.238814   50332 logs.go:123] Gathering logs for container status ...
	I0915 04:12:34.238814   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 04:12:36.058042   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.8192346s)
	I0915 04:12:36.059476   50332 logs.go:123] Gathering logs for kubelet ...
	I0915 04:12:36.059476   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 04:12:39.949389   25180 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (7.3018942s)
	I0915 04:12:39.949579   25180 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 04:12:40.765834   25180 ops.go:34] apiserver oom_adj: -16
	I0915 04:12:40.766089   25180 kubeadm.go:604] restartCluster took 1m59.1069487s
	I0915 04:12:40.766089   25180 kubeadm.go:392] StartCluster complete in 2m0.0386449s
	I0915 04:12:40.766247   25180 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 04:12:40.766520   25180 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 04:12:40.779708   25180 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 04:12:40.946739   25180 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210915040258-22140" rescaled to 1
	I0915 04:12:40.947501   25180 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}
	I0915 04:12:40.950706   25180 out.go:177] * Verifying Kubernetes components...
	I0915 04:12:40.948059   25180 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 04:12:40.949255   25180 config.go:177] Loaded profile config "newest-cni-20210915040258-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 04:12:40.948634   25180 addons.go:404] enableAddons start: toEnable=map[dashboard:true metrics-server:true storage-provisioner:true], additional=[]
	I0915 04:12:40.951630   25180 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20210915040258-22140"
	I0915 04:12:40.951856   25180 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20210915040258-22140"
	I0915 04:12:40.952020   25180 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20210915040258-22140"
	W0915 04:12:40.952020   25180 addons.go:165] addon storage-provisioner should already be in state true
	I0915 04:12:40.952020   25180 addons.go:65] Setting dashboard=true in profile "newest-cni-20210915040258-22140"
	I0915 04:12:40.952407   25180 addons.go:153] Setting addon dashboard=true in "newest-cni-20210915040258-22140"
	W0915 04:12:40.952407   25180 addons.go:165] addon dashboard should already be in state true
	I0915 04:12:40.952020   25180 addons.go:65] Setting metrics-server=true in profile "newest-cni-20210915040258-22140"
	I0915 04:12:40.952407   25180 host.go:66] Checking if "newest-cni-20210915040258-22140" exists ...
	I0915 04:12:40.952407   25180 host.go:66] Checking if "newest-cni-20210915040258-22140" exists ...
	I0915 04:12:40.952244   25180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210915040258-22140"
	I0915 04:12:40.952407   25180 addons.go:153] Setting addon metrics-server=true in "newest-cni-20210915040258-22140"
	W0915 04:12:40.953147   25180 addons.go:165] addon metrics-server should already be in state true
	I0915 04:12:40.953468   25180 host.go:66] Checking if "newest-cni-20210915040258-22140" exists ...
	I0915 04:12:40.973954   25180 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 04:12:40.988683   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:12:40.993185   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:12:40.995340   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:12:40.998657   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:12:37.281594   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.22187s)
	I0915 04:12:37.381349   50332 logs.go:123] Gathering logs for describe nodes ...
	I0915 04:12:37.382292   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 04:12:42.204144   25180 cli_runner.go:168] Completed: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}: (1.2108302s)
	I0915 04:12:42.208414   25180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 04:12:42.209592   25180 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 04:12:42.209592   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 04:12:42.213902   25180 cli_runner.go:168] Completed: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}: (1.2152497s)
	I0915 04:12:42.213902   25180 cli_runner.go:168] Completed: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}: (1.2250416s)
	I0915 04:12:42.217503   25180 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0915 04:12:42.219914   25180 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0915 04:12:42.220381   25180 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 04:12:42.220381   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0915 04:12:39.135104   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (5.7795082s)
	I0915 04:12:39.143329   54608 logs.go:123] Gathering logs for kube-proxy [b39a4e1f4e1a] ...
	I0915 04:12:39.143544   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b39a4e1f4e1a"
	I0915 04:12:41.973939   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b39a4e1f4e1a": (2.8304046s)
	I0915 04:12:41.974723   54608 logs.go:123] Gathering logs for kube-controller-manager [62008d343546] ...
	I0915 04:12:41.974723   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 62008d343546"
	I0915 04:12:42.222526   25180 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0915 04:12:42.222983   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0915 04:12:42.222983   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0915 04:12:42.229908   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:12:42.244878   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:12:42.248881   25180 cli_runner.go:168] Completed: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}: (1.2535454s)
	I0915 04:12:42.258564   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:12:43.167992   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:12:43.172047   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:12:43.209020   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:12:44.082043   25180 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20210915040258-22140"
	W0915 04:12:44.082043   25180 addons.go:165] addon default-storageclass should already be in state true
	I0915 04:12:44.082252   25180 host.go:66] Checking if "newest-cni-20210915040258-22140" exists ...
	I0915 04:12:44.111156   25180 cli_runner.go:115] Run: docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}
	I0915 04:12:44.962256   25180 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 04:12:44.962256   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 04:12:44.980649   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:12:45.870872   25180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59571 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915040258-22140\id_rsa Username:docker}
	I0915 04:12:43.356446   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (5.9740404s)
	I0915 04:12:43.362401   50332 logs.go:123] Gathering logs for kube-scheduler [739aa7fb4f15] ...
	I0915 04:12:43.362866   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 739aa7fb4f15"
	I0915 04:12:46.193039   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 62008d343546": (4.2183313s)
	I0915 04:12:46.214003   54608 logs.go:123] Gathering logs for kube-controller-manager [a4fcce6eda45] ...
	I0915 04:12:46.214003   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 a4fcce6eda45"
	I0915 04:12:48.733680   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 a4fcce6eda45": (2.5196855s)
	I0915 04:12:48.752490   54608 logs.go:123] Gathering logs for container status ...
	I0915 04:12:48.752490   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 04:12:50.294853   25180 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 04:12:50.413346   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0915 04:12:50.413642   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0915 04:12:47.116958   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 739aa7fb4f15": (3.7536372s)
	I0915 04:12:47.136744   50332 logs.go:123] Gathering logs for kube-proxy [c690df0ad35a] ...
	I0915 04:12:47.136744   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c690df0ad35a"
	I0915 04:12:48.058658   50332 logs.go:123] Gathering logs for storage-provisioner [86f3b30e297b] ...
	I0915 04:12:48.058658   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 86f3b30e297b"
	I0915 04:12:50.081486   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 86f3b30e297b": (2.0228346s)
	I0915 04:12:50.083012   50332 logs.go:123] Gathering logs for Docker ...
	I0915 04:12:50.083012   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 04:12:50.613719   50332 logs.go:123] Gathering logs for etcd [5795606a4eee] ...
	I0915 04:12:50.613719   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5795606a4eee"
	I0915 04:12:51.071465   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.3189835s)
	I0915 04:12:51.071465   54608 logs.go:123] Gathering logs for etcd [b039660af93e] ...
	I0915 04:12:51.071465   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b039660af93e"
	I0915 04:12:51.422904   25180 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 04:12:51.422904   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0915 04:12:54.208421   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0915 04:12:54.208550   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0915 04:12:54.745224   25180 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 04:12:55.647410   25180 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 04:12:55.647505   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0915 04:12:55.011987   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5795606a4eee": (4.3982836s)
	I0915 04:12:55.107586   50332 logs.go:123] Gathering logs for coredns [f85c0e7f452e] ...
	I0915 04:12:55.107586   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 f85c0e7f452e"
	I0915 04:12:56.598366   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b039660af93e": (5.526808s)
	I0915 04:12:56.664425   54608 logs.go:123] Gathering logs for kubernetes-dashboard [c96424713549] ...
	I0915 04:12:56.664425   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c96424713549"
	I0915 04:12:58.682020   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 c96424713549": (2.017602s)
	I0915 04:12:58.682020   54608 logs.go:123] Gathering logs for coredns [265438529b0d] ...
	I0915 04:12:58.682020   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 265438529b0d"
	I0915 04:12:57.985810   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0915 04:12:57.985933   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0915 04:12:57.021265   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 f85c0e7f452e": (1.9136858s)
	W0915 04:12:57.021694   50332 logs.go:130] failed coredns [f85c0e7f452e]: command: /bin/bash -c "docker logs --tail 400 f85c0e7f452e" /bin/bash -c "docker logs --tail 400 f85c0e7f452e": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: f85c0e7f452e
	 output: 
	** stderr ** 
	Error: No such container: f85c0e7f452e
	
	** /stderr **
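	For context on the failure above: the coredns id f85c0e7f452e was captured by an earlier `docker ps -a`, and the container was removed before `docker logs` ran, hence `No such container`. minikube records this as a warning (logs.go:130) and moves on to the next container rather than aborting the diagnostics pass, which is why the very next line continues gathering kube-controller-manager logs.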
	I0915 04:12:57.021694   50332 logs.go:123] Gathering logs for kube-controller-manager [562b32868c8b] ...
	I0915 04:12:57.021694   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 562b32868c8b"
	I0915 04:13:00.033976   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 562b32868c8b": (3.0122926s)
	I0915 04:12:59.516581   54608 logs.go:123] Gathering logs for kube-scheduler [7b4900acc0f2] ...
	I0915 04:12:59.516581   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7b4900acc0f2"
	I0915 04:13:00.763312   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 7b4900acc0f2": (1.2467361s)
	I0915 04:13:03.277706   54608 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59340/healthz ...
	I0915 04:13:03.388480   54608 api_server.go:265] https://127.0.0.1:59340/healthz returned 200:
	ok
	I0915 04:13:03.508461   54608 api_server.go:139] control plane version: v1.22.1
	I0915 04:13:03.508461   54608 api_server.go:129] duration metric: took 46.1089739s to wait for apiserver health ...
	I0915 04:13:03.508461   54608 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 04:13:03.517457   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 04:13:01.603415   25180 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 04:13:01.603415   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0915 04:13:03.081342   25180 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (22.1304374s)
	I0915 04:13:03.081655   25180 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (22.1075742s)
	I0915 04:13:03.081827   25180 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0915 04:13:03.101937   25180 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210915040258-22140
	I0915 04:13:03.973426   25180 api_server.go:50] waiting for apiserver process to appear ...
	I0915 04:13:03.995920   25180 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:13:04.133837   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0915 04:13:04.133837   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0915 04:13:02.598842   50332 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 04:13:03.738973   50332 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.1401355s)
	I0915 04:13:03.739235   50332 api_server.go:70] duration metric: took 2m24.4632827s to wait for apiserver process to appear ...
	I0915 04:13:03.739235   50332 api_server.go:86] waiting for apiserver healthz status ...
	I0915 04:13:03.750559   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 04:13:06.095895   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (2.3453442s)
	I0915 04:13:06.096232   50332 logs.go:270] 1 containers: [8de9faa1fcfa]
	I0915 04:13:06.111475   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 04:13:04.967019   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.4495669s)
	I0915 04:13:04.967455   54608 logs.go:270] 1 containers: [9eef4f15a96f]
	I0915 04:13:04.987552   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 04:13:06.149892   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (1.1623445s)
	I0915 04:13:06.150246   54608 logs.go:270] 1 containers: [b039660af93e]
	I0915 04:13:06.152777   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 04:13:07.396926   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (1.2441538s)
	I0915 04:13:07.397095   54608 logs.go:270] 1 containers: [265438529b0d]
	I0915 04:13:07.424939   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 04:13:06.472104   25180 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 04:13:08.890055   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0915 04:13:08.890055   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0915 04:13:07.847984   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (1.7363553s)
	I0915 04:13:07.848088   50332 logs.go:270] 1 containers: [5795606a4eee]
	I0915 04:13:07.855026   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 04:13:09.373790   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (1.5187687s)
	I0915 04:13:09.374118   50332 logs.go:270] 1 containers: [7c72ae242581]
	I0915 04:13:09.376233   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 04:13:10.435342   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.0591124s)
	I0915 04:13:10.435342   50332 logs.go:270] 1 containers: [739aa7fb4f15]
	I0915 04:13:10.435948   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 04:13:09.406033   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.9811011s)
	I0915 04:13:09.406033   54608 logs.go:270] 1 containers: [7b4900acc0f2]
	I0915 04:13:09.419610   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 04:13:10.830615   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.4108787s)
	I0915 04:13:10.830743   54608 logs.go:270] 1 containers: [b39a4e1f4e1a]
	I0915 04:13:10.849457   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 04:13:12.021966   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.172513s)
	I0915 04:13:12.022078   54608 logs.go:270] 1 containers: [c96424713549]
	I0915 04:13:12.032461   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 04:13:13.298867   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.2662576s)
	I0915 04:13:13.298975   54608 logs.go:270] 1 containers: [a966dd345dfd]
	I0915 04:13:13.321051   54608 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
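
Every "N containers: [...]" line in this stretch comes from the same enumeration: docker ps filtered on the kubelet's k8s_<component> container naming (the dockershim convention) and formatted down to bare ids. A sketch of that step, assuming the docker CLI is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainersByName returns the ids of all containers, running or
    // exited, whose name matches the dockershim k8s_<name> prefix.
    func listContainersByName(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+name,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listContainersByName("kube-controller-manager")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
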
	I0915 04:13:13.394967   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0915 04:13:13.394967   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0915 04:13:16.003764   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0915 04:13:16.003764   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0915 04:13:12.602215   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (2.1662746s)
	I0915 04:13:12.602215   50332 logs.go:270] 1 containers: [c690df0ad35a]
	I0915 04:13:12.628898   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 04:13:14.474748   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.8458567s)
	I0915 04:13:14.474748   50332 logs.go:270] 1 containers: [3fa484c8347e]
	I0915 04:13:14.485178   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 04:13:15.585193   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.0998176s)
	I0915 04:13:15.585476   50332 logs.go:270] 1 containers: [86f3b30e297b]
	I0915 04:13:15.598381   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 04:13:15.520963   54608 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (2.1997021s)
	I0915 04:13:15.521328   54608 logs.go:270] 2 containers: [62008d343546 a4fcce6eda45]
	I0915 04:13:15.521328   54608 logs.go:123] Gathering logs for kube-controller-manager [a4fcce6eda45] ...
	I0915 04:13:15.521328   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 a4fcce6eda45"
	I0915 04:13:17.140981   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 a4fcce6eda45": (1.619659s)
	I0915 04:13:17.167376   54608 logs.go:123] Gathering logs for container status ...
	I0915 04:13:17.167376   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 04:13:20.318108   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0915 04:13:20.318108   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0915 04:13:16.649763   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.0511183s)
	I0915 04:13:16.650110   50332 logs.go:270] 2 containers: [5fc3193ec4b2 562b32868c8b]
	I0915 04:13:16.650110   50332 logs.go:123] Gathering logs for etcd [5795606a4eee] ...
	I0915 04:13:16.650110   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5795606a4eee"
	I0915 04:13:18.066920   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5795606a4eee": (1.4168146s)
	I0915 04:13:18.171796   50332 logs.go:123] Gathering logs for kube-proxy [c690df0ad35a] ...
	I0915 04:13:18.171796   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c690df0ad35a"
	I0915 04:13:19.038510   50332 logs.go:123] Gathering logs for kubelet ...
	I0915 04:13:19.038510   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 04:13:19.990608   50332 logs.go:123] Gathering logs for describe nodes ...
	I0915 04:13:19.990608   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 04:13:19.058603   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.8912338s)
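
The "container status" command above is a shell fallback chain: prefer crictl when it is installed, otherwise fall back to plain docker ps. A short sketch of composing and running that one-liner from Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `which crictl || echo crictl` resolves to the crictl path when the
    	// binary exists; otherwise the bare word "crictl" fails to execute
    	// and the outer || branch falls back to docker ps.
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Printf("%s", out)
    }
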
	I0915 04:13:19.058603   54608 logs.go:123] Gathering logs for kubelet ...
	I0915 04:13:19.058603   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 04:13:20.199135   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.1402447s)
	I0915 04:13:20.327165   54608 logs.go:123] Gathering logs for dmesg ...
	I0915 04:13:20.327165   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 04:13:20.994671   54608 logs.go:123] Gathering logs for describe nodes ...
	I0915 04:13:20.994671   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 04:13:21.116984   25180 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (30.8220647s)
	I0915 04:13:22.998276   25180 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 04:13:22.998530   25180 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0915 04:13:24.452756   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.4621631s)
	I0915 04:13:24.471567   50332 logs.go:123] Gathering logs for storage-provisioner [86f3b30e297b] ...
	I0915 04:13:24.471804   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 86f3b30e297b"
	I0915 04:13:26.274513   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 86f3b30e297b": (1.8025202s)
	I0915 04:13:26.275560   50332 logs.go:123] Gathering logs for Docker ...
	I0915 04:13:26.275920   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 04:13:24.301445   54608 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.3067859s)
	I0915 04:13:24.312653   54608 logs.go:123] Gathering logs for etcd [b039660af93e] ...
	I0915 04:13:24.312653   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b039660af93e"
	I0915 04:13:27.043206   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b039660af93e": (2.7304248s)
	I0915 04:13:27.119115   54608 logs.go:123] Gathering logs for kube-proxy [b39a4e1f4e1a] ...
	I0915 04:13:27.119115   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b39a4e1f4e1a"
	I0915 04:13:27.987805   54608 logs.go:123] Gathering logs for Docker ...
	I0915 04:13:27.988078   54608 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 04:13:28.298740   54608 logs.go:123] Gathering logs for kube-scheduler [7b4900acc0f2] ...
	I0915 04:13:28.298945   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7b4900acc0f2"
	I0915 04:13:26.967042   25180 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 04:13:30.487900   25180 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (26.4920733s)
	I0915 04:13:30.488187   25180 api_server.go:70] duration metric: took 49.5408591s to wait for apiserver process to appear ...
	I0915 04:13:30.488187   25180 api_server.go:86] waiting for apiserver healthz status ...
	I0915 04:13:30.488187   25180 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59575/healthz ...
	I0915 04:13:30.512277   25180 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (35.767178s)
	I0915 04:13:30.653735   25180 api_server.go:265] https://127.0.0.1:59575/healthz returned 200:
	ok
	I0915 04:13:30.673255   25180 api_server.go:139] control plane version: v1.22.2-rc.0
	I0915 04:13:30.673354   25180 api_server.go:129] duration metric: took 185.1674ms to wait for apiserver health ...
	I0915 04:13:30.673354   25180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 04:13:30.788442   25180 system_pods.go:59] 8 kube-system pods found
	I0915 04:13:30.788673   25180 system_pods.go:61] "coredns-78fcd69978-d9rxt" [797e8547-2434-4314-85a0-05684a4dff7d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 04:13:30.788673   25180 system_pods.go:61] "etcd-newest-cni-20210915040258-22140" [46a24220-ff27-41ee-8fe6-7b8bd12a34d0] Running
	I0915 04:13:30.788673   25180 system_pods.go:61] "kube-apiserver-newest-cni-20210915040258-22140" [b202b335-37b0-4812-93cd-83ec697e4714] Running
	I0915 04:13:30.788673   25180 system_pods.go:61] "kube-controller-manager-newest-cni-20210915040258-22140" [dd215459-34ba-4fea-baa3-600daa15f30d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 04:13:30.788886   25180 system_pods.go:61] "kube-proxy-kqwtr" [6f658bfb-d105-4855-bfec-a8a2ca888629] Running
	I0915 04:13:30.788886   25180 system_pods.go:61] "kube-scheduler-newest-cni-20210915040258-22140" [070752a7-0be4-43f1-955e-24562cff6675] Running
	I0915 04:13:30.788886   25180 system_pods.go:61] "metrics-server-7c784ccb57-jr9df" [ba2e9edd-4120-4ee1-a671-ffa1f9a6d8b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:13:30.788886   25180 system_pods.go:61] "storage-provisioner" [f23ed07a-c5da-4df9-afd7-222a1872a30d] Running
	I0915 04:13:30.788886   25180 system_pods.go:74] duration metric: took 115.5325ms to wait for pod list to return data ...
	I0915 04:13:30.789068   25180 default_sa.go:34] waiting for default service account to be created ...
	I0915 04:13:30.828257   25180 default_sa.go:45] found service account: "default"
	I0915 04:13:30.828257   25180 default_sa.go:55] duration metric: took 39.1898ms for default service account to be created ...
	I0915 04:13:30.828257   25180 kubeadm.go:547] duration metric: took 49.8809307s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0915 04:13:30.828257   25180 node_conditions.go:102] verifying NodePressure condition ...
	I0915 04:13:30.860013   25180 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 04:13:30.860013   25180 node_conditions.go:123] node cpu capacity is 4
	I0915 04:13:30.860013   25180 node_conditions.go:105] duration metric: took 31.7555ms to run NodePressure ...
	I0915 04:13:30.860013   25180 start.go:231] waiting for startup goroutines ...
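
The NodePressure verification reads the node's advertised capacity (ephemeral storage 65792556Ki and 4 CPUs above). A sketch of the same read with client-go, assuming a kubeconfig at the path the log shows; an illustration only, not minikube's node_conditions.go:

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// assumption: kubeconfig at the minikube-managed location
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[v1.ResourceCPU]
    		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    		fmt.Printf("node cpu capacity is %s\n", cpu.String())
    	}
    }
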
	I0915 04:13:26.689625   50332 logs.go:123] Gathering logs for coredns [7c72ae242581] ...
	I0915 04:13:26.689625   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7c72ae242581"
	I0915 04:13:28.012258   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 7c72ae242581": (1.3226377s)
	I0915 04:13:28.012674   50332 logs.go:123] Gathering logs for kube-scheduler [739aa7fb4f15] ...
	I0915 04:13:28.012797   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 739aa7fb4f15"
	I0915 04:13:28.807180   50332 logs.go:123] Gathering logs for kubernetes-dashboard [3fa484c8347e] ...
	I0915 04:13:28.807769   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 3fa484c8347e"
	I0915 04:13:29.865039   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 3fa484c8347e": (1.0572736s)
	I0915 04:13:29.866059   50332 logs.go:123] Gathering logs for kube-controller-manager [5fc3193ec4b2] ...
	I0915 04:13:29.866059   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5fc3193ec4b2"
	I0915 04:13:30.826533   50332 logs.go:123] Gathering logs for kube-controller-manager [562b32868c8b] ...
	I0915 04:13:30.826533   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 562b32868c8b"
	I0915 04:13:29.610851   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 7b4900acc0f2": (1.3119105s)
	I0915 04:13:29.624185   54608 logs.go:123] Gathering logs for storage-provisioner [a966dd345dfd] ...
	I0915 04:13:29.624401   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 a966dd345dfd"
	I0915 04:13:30.405903   54608 logs.go:123] Gathering logs for kube-apiserver [9eef4f15a96f] ...
	I0915 04:13:30.406240   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 9eef4f15a96f"
	I0915 04:13:32.119406   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 9eef4f15a96f": (1.7131725s)
	I0915 04:13:32.161965   54608 logs.go:123] Gathering logs for coredns [265438529b0d] ...
	I0915 04:13:32.161965   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 265438529b0d"
	I0915 04:13:33.699860   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 265438529b0d": (1.5377108s)
	I0915 04:13:33.700364   54608 logs.go:123] Gathering logs for kubernetes-dashboard [c96424713549] ...
	I0915 04:13:33.700515   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c96424713549"
	I0915 04:13:32.404486   25180 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (25.9260837s)
	I0915 04:13:32.404875   25180 addons.go:375] Verifying addon metrics-server=true in "newest-cni-20210915040258-22140"
	I0915 04:13:32.615880   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 562b32868c8b": (1.7893532s)
	I0915 04:13:32.642921   50332 logs.go:123] Gathering logs for dmesg ...
	I0915 04:13:32.642921   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 04:13:32.837980   50332 logs.go:123] Gathering logs for kube-apiserver [8de9faa1fcfa] ...
	I0915 04:13:32.837980   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 8de9faa1fcfa"
	I0915 04:13:34.452727   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 8de9faa1fcfa": (1.614753s)
	I0915 04:13:34.489088   50332 logs.go:123] Gathering logs for container status ...
	I0915 04:13:34.489088   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 04:13:35.814729   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.3256454s)
	I0915 04:13:35.525414   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 c96424713549": (1.8249052s)
	I0915 04:13:35.534109   54608 logs.go:123] Gathering logs for kube-controller-manager [62008d343546] ...
	I0915 04:13:35.534109   54608 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 62008d343546"
	I0915 04:13:36.930864   54608 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 62008d343546": (1.3967602s)
	I0915 04:13:39.897999   54608 system_pods.go:59] 8 kube-system pods found
	I0915 04:13:39.898111   54608 system_pods.go:61] "coredns-78fcd69978-bn7fz" [46c98906-6e7a-4fd0-861c-fb65a4c5869e] Running
	I0915 04:13:39.898111   54608 system_pods.go:61] "etcd-embed-certs-20210915034625-22140" [3037bebc-5f6a-4c8d-ac6e-867222b64a56] Running
	I0915 04:13:39.898111   54608 system_pods.go:61] "kube-apiserver-embed-certs-20210915034625-22140" [d82c2c23-5f11-4b78-9a41-a85212161f58] Running
	I0915 04:13:39.898111   54608 system_pods.go:61] "kube-controller-manager-embed-certs-20210915034625-22140" [3e350070-0740-4c05-b830-4f3b34ffdd17] Running
	I0915 04:13:39.898111   54608 system_pods.go:61] "kube-proxy-z7cv6" [c3b22d2f-340b-42ce-bd3a-895de3c10507] Running
	I0915 04:13:39.898111   54608 system_pods.go:61] "kube-scheduler-embed-certs-20210915034625-22140" [a46564d6-7291-4b45-ac01-656391736999] Running
	I0915 04:13:39.898111   54608 system_pods.go:61] "metrics-server-7c784ccb57-mfxmp" [b19447ad-da08-4bc7-88eb-77e01fc10bef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:13:39.898111   54608 system_pods.go:61] "storage-provisioner" [d7bfc2a4-5961-4cd6-853a-4b9f8524bcf5] Running
	I0915 04:13:39.898111   54608 system_pods.go:74] duration metric: took 36.3897781s to wait for pod list to return data ...
	I0915 04:13:39.898111   54608 default_sa.go:34] waiting for default service account to be created ...
	I0915 04:13:39.940572   54608 default_sa.go:45] found service account: "default"
	I0915 04:13:39.940572   54608 default_sa.go:55] duration metric: took 42.461ms for default service account to be created ...
	I0915 04:13:39.940572   54608 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 04:13:40.021202   54608 system_pods.go:86] 8 kube-system pods found
	I0915 04:13:40.021202   54608 system_pods.go:89] "coredns-78fcd69978-bn7fz" [46c98906-6e7a-4fd0-861c-fb65a4c5869e] Running
	I0915 04:13:40.021202   54608 system_pods.go:89] "etcd-embed-certs-20210915034625-22140" [3037bebc-5f6a-4c8d-ac6e-867222b64a56] Running
	I0915 04:13:40.021202   54608 system_pods.go:89] "kube-apiserver-embed-certs-20210915034625-22140" [d82c2c23-5f11-4b78-9a41-a85212161f58] Running
	I0915 04:13:40.021400   54608 system_pods.go:89] "kube-controller-manager-embed-certs-20210915034625-22140" [3e350070-0740-4c05-b830-4f3b34ffdd17] Running
	I0915 04:13:40.021400   54608 system_pods.go:89] "kube-proxy-z7cv6" [c3b22d2f-340b-42ce-bd3a-895de3c10507] Running
	I0915 04:13:40.021400   54608 system_pods.go:89] "kube-scheduler-embed-certs-20210915034625-22140" [a46564d6-7291-4b45-ac01-656391736999] Running
	I0915 04:13:40.021400   54608 system_pods.go:89] "metrics-server-7c784ccb57-mfxmp" [b19447ad-da08-4bc7-88eb-77e01fc10bef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:13:40.021400   54608 system_pods.go:89] "storage-provisioner" [d7bfc2a4-5961-4cd6-853a-4b9f8524bcf5] Running
	I0915 04:13:40.021400   54608 system_pods.go:126] duration metric: took 80.8283ms to wait for k8s-apps to be running ...
	I0915 04:13:40.021400   54608 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 04:13:40.039107   54608 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 04:13:40.397715   54608 system_svc.go:56] duration metric: took 376.3163ms WaitForService to wait for kubelet.
	I0915 04:13:40.397957   54608 kubeadm.go:547] duration metric: took 3m58.0113133s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 04:13:40.397957   54608 node_conditions.go:102] verifying NodePressure condition ...
	I0915 04:13:40.459192   54608 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 04:13:40.459192   54608 node_conditions.go:123] node cpu capacity is 4
	I0915 04:13:40.459192   54608 node_conditions.go:105] duration metric: took 61.2353ms to run NodePressure ...
	I0915 04:13:40.459544   54608 start.go:231] waiting for startup goroutines ...
	I0915 04:13:40.740556   54608 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 04:13:40.743269   54608 out.go:177] 
	W0915 04:13:40.744570   54608 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 04:13:40.748307   54608 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 04:13:40.756390   54608 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210915034625-22140" cluster and "default" namespace by default
	I0915 04:13:38.316062   50332 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:59349/healthz ...
	I0915 04:13:38.386567   50332 api_server.go:265] https://127.0.0.1:59349/healthz returned 200:
	ok
	I0915 04:13:38.422914   50332 api_server.go:139] control plane version: v1.22.1
	I0915 04:13:38.422914   50332 api_server.go:129] duration metric: took 34.6838012s to wait for apiserver health ...
	I0915 04:13:38.422914   50332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 04:13:38.438228   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 04:13:39.201137   50332 logs.go:270] 1 containers: [8de9faa1fcfa]
	I0915 04:13:39.220508   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 04:13:40.530254   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (1.3097511s)
	I0915 04:13:40.530254   50332 logs.go:270] 1 containers: [5795606a4eee]
	I0915 04:13:40.543390   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 04:13:43.592740   25180 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (16.6257564s)
	I0915 04:13:43.596109   25180 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0915 04:13:43.596400   25180 addons.go:406] enableAddons completed in 1m2.6488986s
	I0915 04:13:43.875472   25180 start.go:462] kubectl: 1.20.0, cluster: 1.22.2-rc.0 (minor skew: 2)
	I0915 04:13:43.878197   25180 out.go:177] 
	W0915 04:13:43.879222   25180 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.2-rc.0.
	I0915 04:13:43.882195   25180 out.go:177]   - Want kubectl v1.22.2-rc.0? Try 'minikube kubectl -- get pods -A'
	I0915 04:13:43.890213   25180 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210915040258-22140" cluster and "default" namespace by default
	I0915 04:13:41.806012   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (1.2626262s)
	I0915 04:13:41.806420   50332 logs.go:270] 1 containers: [7c72ae242581]
	I0915 04:13:41.810940   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 04:13:42.827463   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.0165264s)
	I0915 04:13:42.827463   50332 logs.go:270] 1 containers: [739aa7fb4f15]
	I0915 04:13:42.839644   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 04:13:43.805178   50332 logs.go:270] 1 containers: [c690df0ad35a]
	I0915 04:13:43.816180   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 04:13:45.859029   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (2.0428563s)
	I0915 04:13:45.859211   50332 logs.go:270] 1 containers: [3fa484c8347e]
	I0915 04:13:45.872652   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 04:13:46.333061   24768 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m43.2358136s)
	I0915 04:13:46.352416   24768 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I0915 04:13:46.476345   24768 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 04:13:46.897436   24768 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 04:13:46.967431   24768 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0915 04:13:46.986429   24768 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 04:13:47.105704   24768 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 04:13:47.107544   24768 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
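
The kubeadm init invocation above pins the version-matched binaries directory onto PATH and suppresses a fixed list of preflight checks (leftover manifests and data dirs from the reset, the kubelet port, swap, memory, and SystemVerification, which is skipped on the docker driver per the note at 04:13:46). A sketch of composing that command line, using the exact paths and check names from the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	ignored := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"DirAvailable--var-lib-minikube",
    		"DirAvailable--var-lib-minikube-etcd",
    		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
    		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
    		"Port-10250", "Swap", "Mem", "SystemVerification",
    		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    	}
    	cmd := fmt.Sprintf(
    		"sudo env PATH=/var/lib/minikube/binaries/v1.22.2-rc.0:$PATH "+
    			"kubeadm init --config /var/tmp/minikube/kubeadm.yaml "+
    			"--ignore-preflight-errors=%s",
    		strings.Join(ignored, ","))
    	fmt.Println(cmd)
    }
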
	I0915 04:13:47.685996   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.813351s)
	I0915 04:13:47.685996   50332 logs.go:270] 1 containers: [86f3b30e297b]
	I0915 04:13:47.695057   50332 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 04:13:49.126777   50332 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.4317244s)
	I0915 04:13:49.126926   50332 logs.go:270] 2 containers: [5fc3193ec4b2 562b32868c8b]
	I0915 04:13:49.126926   50332 logs.go:123] Gathering logs for kube-scheduler [739aa7fb4f15] ...
	I0915 04:13:49.126926   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 739aa7fb4f15"
	I0915 04:13:51.256919   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 739aa7fb4f15": (2.130001s)
	I0915 04:13:51.280090   50332 logs.go:123] Gathering logs for container status ...
	I0915 04:13:51.280090   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 04:13:52.779995   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.4999101s)
	I0915 04:13:52.779995   50332 logs.go:123] Gathering logs for kube-apiserver [8de9faa1fcfa] ...
	I0915 04:13:52.779995   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 8de9faa1fcfa"
	I0915 04:13:55.823882   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 8de9faa1fcfa": (3.043897s)
	I0915 04:13:55.858260   50332 logs.go:123] Gathering logs for etcd [5795606a4eee] ...
	I0915 04:13:55.858260   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5795606a4eee"
	I0915 04:13:57.393231   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5795606a4eee": (1.5349768s)
	I0915 04:13:57.498634   50332 logs.go:123] Gathering logs for kube-controller-manager [5fc3193ec4b2] ...
	I0915 04:13:57.498634   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5fc3193ec4b2"
	I0915 04:13:59.805503   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5fc3193ec4b2": (2.3068778s)
	I0915 04:13:59.826489   50332 logs.go:123] Gathering logs for kube-controller-manager [562b32868c8b] ...
	I0915 04:13:59.826489   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 562b32868c8b"
	I0915 04:14:03.808671   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 562b32868c8b": (3.9821957s)
	I0915 04:14:03.831668   50332 logs.go:123] Gathering logs for coredns [7c72ae242581] ...
	I0915 04:14:03.831668   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7c72ae242581"
	I0915 04:14:05.822625   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 7c72ae242581": (1.9909636s)
	I0915 04:14:05.822625   50332 logs.go:123] Gathering logs for storage-provisioner [86f3b30e297b] ...
	I0915 04:14:05.822625   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 86f3b30e297b"
	I0915 04:14:07.695328   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 86f3b30e297b": (1.8725336s)
	I0915 04:14:07.696046   50332 logs.go:123] Gathering logs for describe nodes ...
	I0915 04:14:07.696046   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 04:14:11.615851   50332 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.9196614s)
	I0915 04:14:11.620416   50332 logs.go:123] Gathering logs for kubelet ...
	I0915 04:14:11.620416   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 04:14:12.335168   50332 logs.go:123] Gathering logs for dmesg ...
	I0915 04:14:12.335168   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 04:14:12.744093   50332 logs.go:123] Gathering logs for Docker ...
	I0915 04:14:12.744093   50332 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 04:14:13.165431   50332 logs.go:123] Gathering logs for kube-proxy [c690df0ad35a] ...
	I0915 04:14:13.165431   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c690df0ad35a"
	I0915 04:14:14.300715   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 c690df0ad35a": (1.1352881s)
	I0915 04:14:14.304277   50332 logs.go:123] Gathering logs for kubernetes-dashboard [3fa484c8347e] ...
	I0915 04:14:14.304516   50332 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 3fa484c8347e"
	I0915 04:14:15.513684   50332 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 3fa484c8347e": (1.2091724s)
	I0915 04:14:18.142853   50332 system_pods.go:59] 8 kube-system pods found
	I0915 04:14:18.142853   50332 system_pods.go:61] "coredns-78fcd69978-2ds62" [ed6b80e5-a87c-41ac-b092-a33f5a4b14e7] Running
	I0915 04:14:18.142853   50332 system_pods.go:61] "etcd-default-k8s-different-port-20210915034637-22140" [842701c5-6d85-4b56-8e2c-28a4b07f4741] Running
	I0915 04:14:18.142853   50332 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210915034637-22140" [08c58be8-98a9-4b38-88ab-83ebbb0f82ad] Running
	I0915 04:14:18.142853   50332 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210915034637-22140" [706789a3-5904-452f-8146-3b97bd579e7a] Running
	I0915 04:14:18.142853   50332 system_pods.go:61] "kube-proxy-2dt6m" [a8d37509-2a77-4d3c-a3ec-069b5fa51377] Running
	I0915 04:14:18.142853   50332 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210915034637-22140" [3e0b367d-7c9c-4066-b92c-73d7eb2c3ca5] Running
	I0915 04:14:18.142853   50332 system_pods.go:61] "metrics-server-7c784ccb57-p77xb" [b5231c7e-1d45-4902-8b27-b784ad6f1d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:14:18.142853   50332 system_pods.go:61] "storage-provisioner" [1c7f0e66-d3aa-4a3f-8a08-a8ea9a51e19c] Running
	I0915 04:14:18.142853   50332 system_pods.go:74] duration metric: took 39.7199175s to wait for pod list to return data ...
	I0915 04:14:18.142853   50332 default_sa.go:34] waiting for default service account to be created ...
	I0915 04:14:18.167128   50332 default_sa.go:45] found service account: "default"
	I0915 04:14:18.167346   50332 default_sa.go:55] duration metric: took 24.4933ms for default service account to be created ...
	I0915 04:14:18.167346   50332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 04:14:18.424097   50332 system_pods.go:86] 8 kube-system pods found
	I0915 04:14:18.424097   50332 system_pods.go:89] "coredns-78fcd69978-2ds62" [ed6b80e5-a87c-41ac-b092-a33f5a4b14e7] Running
	I0915 04:14:18.424097   50332 system_pods.go:89] "etcd-default-k8s-different-port-20210915034637-22140" [842701c5-6d85-4b56-8e2c-28a4b07f4741] Running
	I0915 04:14:18.424097   50332 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210915034637-22140" [08c58be8-98a9-4b38-88ab-83ebbb0f82ad] Running
	I0915 04:14:18.424097   50332 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210915034637-22140" [706789a3-5904-452f-8146-3b97bd579e7a] Running
	I0915 04:14:18.424097   50332 system_pods.go:89] "kube-proxy-2dt6m" [a8d37509-2a77-4d3c-a3ec-069b5fa51377] Running
	I0915 04:14:18.424097   50332 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210915034637-22140" [3e0b367d-7c9c-4066-b92c-73d7eb2c3ca5] Running
	I0915 04:14:18.424097   50332 system_pods.go:89] "metrics-server-7c784ccb57-p77xb" [b5231c7e-1d45-4902-8b27-b784ad6f1d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 04:14:18.424097   50332 system_pods.go:89] "storage-provisioner" [1c7f0e66-d3aa-4a3f-8a08-a8ea9a51e19c] Running
	I0915 04:14:18.424097   50332 system_pods.go:126] duration metric: took 256.7526ms to wait for k8s-apps to be running ...
	I0915 04:14:18.424097   50332 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 04:14:18.439780   50332 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 04:14:19.011667   50332 system_svc.go:56] duration metric: took 587.5712ms WaitForService to wait for kubelet.
	I0915 04:14:19.011667   50332 kubeadm.go:547] duration metric: took 3m39.7362395s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 04:14:19.011667   50332 node_conditions.go:102] verifying NodePressure condition ...
	I0915 04:14:19.053342   50332 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 04:14:19.053342   50332 node_conditions.go:123] node cpu capacity is 4
	I0915 04:14:19.053342   50332 node_conditions.go:105] duration metric: took 41.6752ms to run NodePressure ...
	I0915 04:14:19.053342   50332 start.go:231] waiting for startup goroutines ...
	I0915 04:14:19.268078   50332 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 04:14:19.270105   50332 out.go:177] 
	W0915 04:14:19.271181   50332 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilites with Kubernetes 1.22.1.
	I0915 04:14:19.273081   50332 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 04:14:19.276213   50332 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210915034637-22140" cluster and "default" namespace by default
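
The "(minor skew: 2)" warnings here and at 04:13:40 and 04:13:43 come from comparing the minor version of the local kubectl (1.20.0) against the cluster's (1.22.1); kubectl's support policy allows one minor of skew, so a difference above 1 triggers the notice. A sketch of the comparison:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings, e.g. 1.20.0 vs 1.22.1 -> 2.
    func minorSkew(a, b string) int {
    	minor := func(v string) int {
    		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
    		return m
    	}
    	d := minor(a) - minor(b)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	local, cluster := "1.20.0", "1.22.1"
    	skew := minorSkew(local, cluster)
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", local, cluster, skew)
    	if skew > 1 {
    		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", local, cluster)
    	}
    }

A pre-release cluster version such as 1.22.2-rc.0 still parses to minor 22 here, so the same warning fires for the newest-cni run above.
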
	I0915 04:15:31.047401   24768 out.go:204]   - Generating certificates and keys ...
	I0915 04:15:31.059304   24768 out.go:204]   - Booting up control plane ...
	I0915 04:15:31.064316   24768 out.go:204]   - Configuring RBAC rules ...
	I0915 04:15:31.068334   24768 cni.go:93] Creating CNI manager for ""
	I0915 04:15:31.068334   24768 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 04:15:31.074305   24768 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 04:15:31.077358   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:31.077358   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3 minikube.k8s.io/name=no-preload-20210915034542-22140 minikube.k8s.io/updated_at=2021_09_15T04_15_31_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:32.021740   24768 ops.go:34] apiserver oom_adj: -16
	I0915 04:15:33.248031   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (2.17068s)
	I0915 04:15:33.271350   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:35.911652   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3 minikube.k8s.io/name=no-preload-20210915034542-22140 minikube.k8s.io/updated_at=2021_09_15T04_15_31_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (4.8343099s)
	I0915 04:15:36.232159   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.9605509s)
	I0915 04:15:36.743796   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 04:15:38.317375   24768 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.5735841s)
	I0915 04:15:38.750865   24768 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
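
The oom_adj read at 04:15:31 checks that the newest kube-apiserver process carries the expected OOM protection (the -16 reported at 04:15:32); note the dmesg section further down records that /proc/<pid>/oom_adj is deprecated in favor of oom_score_adj. A simplified sketch of the same check (the log uses the stricter `pgrep -xnf kube-apiserver.*minikube.*`):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// newest matching pid, a simplification of pgrep -xnf in the log
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	// oom_adj is deprecated; oom_score_adj is the modern equivalent
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
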
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 04:04:34 UTC, end at Wed 2021-09-15 04:16:01 UTC. --
	Sep 15 04:09:03 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:09:03.878825000Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 04:10:25 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:10:25.374198300Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 04:10:25 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:10:25.377291300Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 04:10:25 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:10:25.397660200Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 04:12:10 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:10.982705700Z" level=info msg="ignoring event" container=2e04b46d16ee49ac62298be78a4313f4836e68fa9543e26ec2e1cdbc5d1095fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:22 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:22.578013000Z" level=info msg="Container 5399cdd2c7b293993a563d2945afb497c66cedb2f0a00561f22ad69496656d47 failed to exit within 10 seconds of signal 15 - using the force"
	Sep 15 04:12:23 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:23.600153400Z" level=info msg="ignoring event" container=5399cdd2c7b293993a563d2945afb497c66cedb2f0a00561f22ad69496656d47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:31 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:31.209565900Z" level=info msg="ignoring event" container=11c926e5227573b2a0ab10c05082e46ed12b3cbded9ecb776420ed58366739d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:35 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:35.605348500Z" level=info msg="ignoring event" container=4cb5dc3a860bc35f6381f763b0bd9f6a02ab4c363aeeb6de0a9d2a5f67ec54b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:38 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:38.106585500Z" level=info msg="ignoring event" container=db589898d45fb406e7e208084f81683697baeb658052db0d0fbe4dd23d4a2390 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:41 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:41.545388500Z" level=info msg="ignoring event" container=33b07c2ae57ac423ad30cfd20e6cc52571d5f4eb126e57a0c4e472012e91d8b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:49 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:49.798244400Z" level=info msg="ignoring event" container=71e784ce3528971f63a89520786cbb6fcf455dda1f2f0084b502eef4ae46576a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:52 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:52.898685300Z" level=info msg="ignoring event" container=92497d8b880242676c2a679bc15124f1e74b16fc477d967fead45ddff07818f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:12:56 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:12:56.649937100Z" level=info msg="ignoring event" container=390fc5c78ff1e49d6f2e2882dadb251c70e0d0506dccf948c1bc2e0065b0df9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:00 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:00.696929600Z" level=info msg="ignoring event" container=4445d0bc6a60f39c0c54b1073df8ccf73daf8d76dfaccadcc199404279b998ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:14 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:14.372182200Z" level=info msg="Container 08eb623d1caeb48fc923ba24ac3b1716aa7b83cecf52b7c07e5db6b5ed502367 failed to exit within 10 seconds of signal 15 - using the force"
	Sep 15 04:13:14 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:14.994067800Z" level=info msg="ignoring event" container=08eb623d1caeb48fc923ba24ac3b1716aa7b83cecf52b7c07e5db6b5ed502367 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:18 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:18.134213100Z" level=info msg="ignoring event" container=1a02c561c07549402482e7b4b8f4f49535e3aee62677ba6ac17dfa640cb9941f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:30 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:30.609956500Z" level=info msg="Container 7a457ca0bdca36edb10bce967bce36bb358bd583a7fb3dda40e1108c0d891554 failed to exit within 10 seconds of signal 15 - using the force"
	Sep 15 04:13:31 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:31.483615100Z" level=info msg="ignoring event" container=7a457ca0bdca36edb10bce967bce36bb358bd583a7fb3dda40e1108c0d891554 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:32 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:32.855253100Z" level=info msg="ignoring event" container=0e48b54936043c652c27f00b129a384396f2ed4295ed5c27cc875f4acd76983e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:34 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:34.117931400Z" level=info msg="ignoring event" container=7428777042b1605356b95ccb41f907889b78441a949783848a17432faf3da87b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:35 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:35.304770600Z" level=info msg="ignoring event" container=b0008b06a45a47141d285ae18949a08fab05809b2e3cf7943db2e437c02e606d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:13:36 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:13:36.583723800Z" level=info msg="ignoring event" container=0c95b68458c4eed81a2252e022911144877ecddf21a714853353a7e4846edae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 04:15:08 no-preload-20210915034542-22140 dockerd[214]: time="2021-09-15T04:15:08.148581100Z" level=info msg="ignoring event" container=7e5b2579af0876f0f6a7c6c4dee0e9232b1ecbce704a00aeb99c5c9c9852e59f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	38d5146cd3710       8d147537fb7d1       5 seconds ago        Running             coredns                   0                   6af6fc3091f64
	2868bafa51a74       8d147537fb7d1       5 seconds ago        Running             coredns                   0                   16b17b2fdf116
	497edfc1359b8       b70be673718a1       8 seconds ago        Running             kube-proxy                0                   be468b4602d1b
	a8b7111440bc4       ebabbbe9d3231       51 seconds ago       Running             kube-controller-manager   5                   2c9a8b98229ad
	7e5b2579af087       ebabbbe9d3231       About a minute ago   Exited              kube-controller-manager   4                   2c9a8b98229ad
	7228bd0a6eeb7       1147a8b9229bd       About a minute ago   Running             kube-apiserver            2                   0588d144d74d8
	c461637196a17       0048118155842       About a minute ago   Running             etcd                      2                   2ee35be6c564e
	bfaa8d89f44a2       da7461484b41f       About a minute ago   Running             kube-scheduler            2                   47bc0bf076c67
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210915034542-22140
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20210915034542-22140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d234465a435c40d154c10f5ac847cc10f4e5fc3
	                    minikube.k8s.io/name=no-preload-20210915034542-22140
	                    minikube.k8s.io/updated_at=2021_09_15T04_15_31_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 04:15:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210915034542-22140
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 04:15:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 04:15:48 +0000   Wed, 15 Sep 2021 04:15:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 04:15:48 +0000   Wed, 15 Sep 2021 04:15:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 04:15:48 +0000   Wed, 15 Sep 2021 04:15:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 04:15:48 +0000   Wed, 15 Sep 2021 04:15:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-20210915034542-22140
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                7a0cc8f4-385c-4c87-8226-8fdd41c19ad7
	  Boot ID:                    31a72c78-717c-4979-9c6b-d3a794aac31d
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.2-rc.0
	  Kube-Proxy Version:         v1.22.2-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-gx5vz                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     16s
	  kube-system                 coredns-78fcd69978-s68wr                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     15s
	  kube-system                 etcd-no-preload-20210915034542-22140                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         23s
	  kube-system                 kube-apiserver-no-preload-20210915034542-22140             250m (6%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-controller-manager-no-preload-20210915034542-22140    200m (5%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-nwp6n                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-no-preload-20210915034542-22140             100m (2%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (21%)  0 (0%)
	  memory             240Mi (1%)  340Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 29s   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet  Node no-preload-20210915034542-22140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet  Node no-preload-20210915034542-22140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet  Node no-preload-20210915034542-22140 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             27s   kubelet  Node no-preload-20210915034542-22140 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  24s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                16s   kubelet  Node no-preload-20210915034542-22140 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002]  ? ktime_get_update_offsets_now+0x36/0x95
	[  +0.000002]  hrtimer_interrupt+0x92/0x165
	[  +0.000003]  hv_stimer0_isr+0x20/0x2d
	[  +0.000007]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000009]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000001]  </IRQ>
	[  +0.000001] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 fd b5 6f 69 ff ff ff 7f c3 e8 77 ce 72 ff f4 c3 e8 70 ce 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 f1 f5 81 ff 65 8b 35 b3 4b 6f 69 31 ff e8
	[  +0.000001] RSP: 0018:ffff98b6000a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000001] RAX: ffffffff9691a410 RBX: 0000000000000001 RCX: ffffffff97253150
	[  +0.000001] RDX: 00000000001bfb3e RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 011cf099150136ab R09: 0000000000000002
	[  +0.000000] R10: ffff8b9f6df73938 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8b9fae19e1c0 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? ldsem_down_write+0x1da/0x1da
	[  +0.000009]  ? native_safe_halt+0x5/0x8
	[  +0.000001]  default_idle+0x1b/0x2c
	[  +0.000001]  do_idle+0xe5/0x216
	[  +0.000002]  cpu_startup_entry+0x6f/0x71
	[  +0.000003]  start_secondary+0x18e/0x1a9
	[  +0.000006]  secondary_startup_64+0xa4/0xb0
	[  +0.000005] ---[ end trace f027fbf82db24e21 ]---
	[Sep15 03:23] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Sep15 03:36] tee (174639): /proc/173352/oom_adj is deprecated, please use /proc/173352/oom_score_adj instead.
	
	* 
	* ==> etcd [c461637196a1] <==
	* {"level":"info","ts":"2021-09-15T04:15:48.915Z","caller":"traceutil/trace.go:171","msg":"trace[567061058] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"141.5373ms","start":"2021-09-15T04:15:48.774Z","end":"2021-09-15T04:15:48.915Z","steps":["trace[567061058] 'process raft request'  (duration: 134.515ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T04:15:49.120Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.365ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722563405476796787 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:338 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2754 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T04:15:49.124Z","caller":"traceutil/trace.go:171","msg":"trace[2048108302] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"125.5944ms","start":"2021-09-15T04:15:48.999Z","end":"2021-09-15T04:15:49.124Z","steps":["trace[2048108302] 'process raft request'  (duration: 14.6127ms)","trace[2048108302] 'compare'  (duration: 106.2312ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T04:15:49.406Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"135.752ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722563405476796798 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-public/default-token-mnhqj\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-public/default-token-mnhqj\" value_size:2620 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T04:15:49.412Z","caller":"traceutil/trace.go:171","msg":"trace[1056928012] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"193.4541ms","start":"2021-09-15T04:15:49.218Z","end":"2021-09-15T04:15:49.412Z","steps":["trace[1056928012] 'process raft request'  (duration: 51.8508ms)","trace[1056928012] 'compare'  (duration: 135.6179ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T04:15:49.413Z","caller":"traceutil/trace.go:171","msg":"trace[1577680462] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"193.0305ms","start":"2021-09-15T04:15:49.220Z","end":"2021-09-15T04:15:49.413Z","steps":["trace[1577680462] 'process raft request'  (duration: 186.6512ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:15:49.414Z","caller":"traceutil/trace.go:171","msg":"trace[908238228] linearizableReadLoop","detail":"{readStateIndex:425; appliedIndex:423; }","duration":"146.7243ms","start":"2021-09-15T04:15:49.267Z","end":"2021-09-15T04:15:49.414Z","steps":["trace[908238228] 'read index received'  (duration: 3.3366ms)","trace[908238228] 'applied index is now lower than readState.Index'  (duration: 143.3854ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T04:15:49.414Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"146.8195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/kube-node-lease/\" range_end:\"/registry/resourcequotas/kube-node-lease0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T04:15:49.414Z","caller":"traceutil/trace.go:171","msg":"trace[2133280923] range","detail":"{range_begin:/registry/resourcequotas/kube-node-lease/; range_end:/registry/resourcequotas/kube-node-lease0; response_count:0; response_revision:419; }","duration":"146.9249ms","start":"2021-09-15T04:15:49.267Z","end":"2021-09-15T04:15:49.414Z","steps":["trace[2133280923] 'agreement among raft nodes before linearized reading'  (duration: 146.8175ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:15:49.429Z","caller":"traceutil/trace.go:171","msg":"trace[685494366] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"157.6057ms","start":"2021-09-15T04:15:49.272Z","end":"2021-09-15T04:15:49.429Z","steps":["trace[685494366] 'process raft request'  (duration: 134.8719ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:15:49.430Z","caller":"traceutil/trace.go:171","msg":"trace[1776512365] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"147.9028ms","start":"2021-09-15T04:15:49.282Z","end":"2021-09-15T04:15:49.430Z","steps":["trace[1776512365] 'process raft request'  (duration: 124.5146ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:15:49.431Z","caller":"traceutil/trace.go:171","msg":"trace[225383145] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"133.6008ms","start":"2021-09-15T04:15:49.297Z","end":"2021-09-15T04:15:49.431Z","steps":["trace[225383145] 'process raft request'  (duration: 109.6796ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T04:15:49.821Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"132.2068ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722563405476796808 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-20210915034542-22140\" mod_revision:362 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-20210915034542-22140\" value_size:7146 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-20210915034542-22140\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T04:15:49.822Z","caller":"traceutil/trace.go:171","msg":"trace[1901173629] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:431; }","duration":"114.7636ms","start":"2021-09-15T04:15:49.707Z","end":"2021-09-15T04:15:49.822Z","steps":["trace[1901173629] 'read index received'  (duration: 67.7716ms)","trace[1901173629] 'applied index is now lower than readState.Index'  (duration: 46.9891ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T04:15:49.822Z","caller":"traceutil/trace.go:171","msg":"trace[446722683] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"145.3035ms","start":"2021-09-15T04:15:49.677Z","end":"2021-09-15T04:15:49.822Z","steps":["trace[446722683] 'process raft request'  (duration: 12.226ms)","trace[446722683] 'compare'  (duration: 132.0732ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T04:15:49.822Z","caller":"traceutil/trace.go:171","msg":"trace[523410861] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"116.7705ms","start":"2021-09-15T04:15:49.705Z","end":"2021-09-15T04:15:49.822Z","steps":["trace[523410861] 'process raft request'  (duration: 116.064ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:15:49.841Z","caller":"traceutil/trace.go:171","msg":"trace[357340551] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"133.2142ms","start":"2021-09-15T04:15:49.708Z","end":"2021-09-15T04:15:49.841Z","steps":["trace[357340551] 'process raft request'  (duration: 113.8423ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T04:15:49.841Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"134.2212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:226"}
	{"level":"info","ts":"2021-09-15T04:15:49.841Z","caller":"traceutil/trace.go:171","msg":"trace[601630852] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:426; }","duration":"134.263ms","start":"2021-09-15T04:15:49.707Z","end":"2021-09-15T04:15:49.841Z","steps":["trace[601630852] 'agreement among raft nodes before linearized reading'  (duration: 134.1839ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:15:49.841Z","caller":"traceutil/trace.go:171","msg":"trace[489261989] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"133.2576ms","start":"2021-09-15T04:15:49.708Z","end":"2021-09-15T04:15:49.841Z","steps":["trace[489261989] 'process raft request'  (duration: 113.6291ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T04:15:51.194Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.9087ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722563405476796841 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-78fcd69978-s68wr\" mod_revision:426 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-78fcd69978-s68wr\" value_size:3639 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-78fcd69978-s68wr\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T04:15:51.204Z","caller":"traceutil/trace.go:171","msg":"trace[813589054] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"227.4505ms","start":"2021-09-15T04:15:50.977Z","end":"2021-09-15T04:15:51.204Z","steps":["trace[813589054] 'process raft request'  (duration: 49.8888ms)","trace[813589054] 'compare'  (duration: 166.5891ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T04:16:02.064Z","caller":"traceutil/trace.go:171","msg":"trace[567018582] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"100.4535ms","start":"2021-09-15T04:16:01.963Z","end":"2021-09-15T04:16:02.064Z","steps":["trace[567018582] 'process raft request'  (duration: 33.0389ms)","trace[567018582] 'compare'  (duration: 57.0327ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T04:16:02.114Z","caller":"traceutil/trace.go:171","msg":"trace[674575707] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"150.0024ms","start":"2021-09-15T04:16:01.964Z","end":"2021-09-15T04:16:02.114Z","steps":["trace[674575707] 'process raft request'  (duration: 101.9005ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T04:16:02.184Z","caller":"traceutil/trace.go:171","msg":"trace[1101273592] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"202.8178ms","start":"2021-09-15T04:16:01.980Z","end":"2021-09-15T04:16:02.183Z","steps":["trace[1101273592] 'process raft request'  (duration: 134.2505ms)","trace[1101273592] 'compare'  (duration: 50.902ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  04:16:04 up  2:52,  0 users,  load average: 31.68, 46.62, 42.60
	Linux no-preload-20210915034542-22140 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7228bd0a6eeb] <==
	* Trace[1282879176]: ---"Object stored in database" 439ms (04:15:47.502)
	Trace[1282879176]: [682.3441ms] [682.3441ms] END
	I0915 04:15:47.609240       1 trace.go:205] Trace[860050311]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/clusterrole-aggregation-controller/token,user-agent:kube-controller-manager/v1.22.2 (linux/amd64) kubernetes/55ab142/kube-controller-manager,audit-id:4c975229-e0b5-4a79-9e67-9cfc9f528d3e,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 04:15:46.951) (total time: 657ms):
	Trace[860050311]: ---"Object stored in database" 656ms (04:15:47.609)
	Trace[860050311]: [657.1608ms] [657.1608ms] END
	I0915 04:15:47.857413       1 trace.go:205] Trace[1989622622]: "GuaranteedUpdate etcd3" type:*apps.Deployment (15-Sep-2021 04:15:47.305) (total time: 551ms):
	Trace[1989622622]: ---"Transaction prepared" 304ms (04:15:47.610)
	Trace[1989622622]: ---"Transaction committed" 246ms (04:15:47.857)
	Trace[1989622622]: [551.6477ms] [551.6477ms] END
	I0915 04:15:47.906367       1 trace.go:205] Trace[300394668]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.22.2 (linux/amd64) kubernetes/55ab142/system:serviceaccount:kube-system:deployment-controller,audit-id:42d8caab-1b7a-48b0-9706-37167a58a6dd,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 04:15:47.279) (total time: 627ms):
	Trace[300394668]: ---"Object stored in database" 551ms (04:15:47.857)
	Trace[300394668]: [627.0147ms] [627.0147ms] END
	I0915 04:15:48.112998       1 trace.go:205] Trace[1682664236]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/daemon-set-controller/token,user-agent:kube-controller-manager/v1.22.2 (linux/amd64) kubernetes/55ab142/kube-controller-manager,audit-id:f98283a1-b241-4076-80b6-7cb446f759a3,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 04:15:47.522) (total time: 590ms):
	Trace[1682664236]: ---"Object stored in database" 590ms (04:15:48.112)
	Trace[1682664236]: [590.2472ms] [590.2472ms] END
	I0915 04:15:48.123213       1 trace.go:205] Trace[1522935590]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/endpoint-controller/token,user-agent:kube-controller-manager/v1.22.2 (linux/amd64) kubernetes/55ab142/kube-controller-manager,audit-id:6052d769-189c-428c-a727-9d0bbfb09cb6,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 04:15:47.535) (total time: 587ms):
	Trace[1522935590]: ---"Object stored in database" 497ms (04:15:48.122)
	Trace[1522935590]: [587.847ms] [587.847ms] END
	I0915 04:15:48.186648       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0915 04:15:49.924515       1 trace.go:205] Trace[264302366]: "Create" url:/api/v1/namespaces/kube-node-lease/serviceaccounts,user-agent:kube-controller-manager/v1.22.2 (linux/amd64) kubernetes/55ab142/system:serviceaccount:kube-system:service-account-controller,audit-id:53195c78-0a0b-49c1-9204-00e0034d88c8,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 04:15:49.081) (total time: 843ms):
	Trace[264302366]: ---"Object stored in database" 842ms (04:15:49.924)
	Trace[264302366]: [843.1686ms] [843.1686ms] END
	I0915 04:15:50.087393       1 trace.go:205] Trace[129247946]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.22.2 (linux/amd64) kubernetes/55ab142,audit-id:c5bbc3c0-1a9b-4b52-8df2-dba971792fbe,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 04:15:49.582) (total time: 500ms):
	Trace[129247946]: ---"Object stored in database" 500ms (04:15:50.082)
	Trace[129247946]: [500.6123ms] [500.6123ms] END
	
	* 
	* ==> kube-controller-manager [7e5b2579af08] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext(0x51d55d0, 0xc000694080, 0xdf8475800, 0xc0001b2140, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x65
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc0001b2120, 0xc000114360, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:533 +0xa5
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:162 +0x328
	
	goroutine 168 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc000114360, 0xc0001b2130, 0x51d55d0, 0xc000694080)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:298 +0x87
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:297 +0x8c
	
	goroutine 169 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc000114900, 0xdf8475800, 0x0, 0x51d55d0, 0xc0006940c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:705 +0x156
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:688 +0x96
	
	goroutine 176 [runnable]:
	net/http.setRequestCancel.func4(0x0, 0xc00106c810, 0xc0006e8370, 0xc0010603fc, 0xc000115260)
		/usr/local/go/src/net/http/client.go:397 +0x96
	created by net/http.setRequestCancel
		/usr/local/go/src/net/http/client.go:396 +0x337
	
	* 
	* ==> kube-controller-manager [a8b7111440bc] <==
	* I0915 04:15:46.778096       1 disruption.go:371] Sending events to api server.
	I0915 04:15:46.779658       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 04:15:46.784492       1 shared_informer.go:247] Caches are synced for job 
	I0915 04:15:46.784753       1 range_allocator.go:373] Set node no-preload-20210915034542-22140 PodCIDR to [10.244.0.0/24]
	I0915 04:15:46.784932       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0915 04:15:46.806634       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0915 04:15:46.807006       1 shared_informer.go:247] Caches are synced for HPA 
	I0915 04:15:46.807158       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0915 04:15:46.807195       1 shared_informer.go:247] Caches are synced for endpoint 
	I0915 04:15:46.814156       1 shared_informer.go:247] Caches are synced for taint 
	I0915 04:15:46.814379       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0915 04:15:46.814522       1 node_lifecycle_controller.go:1013] Missing timestamp for Node no-preload-20210915034542-22140. Assuming now as a timestamp.
	I0915 04:15:46.814673       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0915 04:15:46.823699       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0915 04:15:46.833334       1 event.go:291] "Event occurred" object="no-preload-20210915034542-22140" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20210915034542-22140 event: Registered Node no-preload-20210915034542-22140 in Controller"
	I0915 04:15:46.857718       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 04:15:46.860372       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 04:15:47.239481       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0915 04:15:47.558068       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 04:15:47.575524       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 04:15:47.575544       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 04:15:48.705203       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nwp6n"
	I0915 04:15:48.971291       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-gx5vz"
	I0915 04:15:49.735406       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-s68wr"
	I0915 04:15:51.816084       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [497edfc1359b] <==
	* I0915 04:15:58.401572       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0915 04:15:58.402209       1 server_others.go:140] Detected node IP 192.168.85.2
	W0915 04:15:58.402308       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 04:15:58.902139       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 04:15:58.902219       1 server_others.go:212] Using iptables Proxier.
	I0915 04:15:58.902246       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 04:15:58.902301       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 04:15:58.919519       1 server.go:649] Version: v1.22.2-rc.0
	I0915 04:15:58.962303       1 config.go:315] Starting service config controller
	I0915 04:15:58.977996       1 config.go:224] Starting endpoint slice config controller
	I0915 04:15:58.984282       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0915 04:15:58.978070       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0915 04:15:59.081937       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210915034542-22140.16a4e3bb643b88b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc0487abfba3f6788, ext:1096587601, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210915034542-22140", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210915034542-22140", UID:"no-preload-20210915034542-22140", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210915034542-22140.16a4e3bb643b88b4" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 04:15:59.188838       1 shared_informer.go:247] Caches are synced for service config 
	I0915 04:15:59.204732       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [bfaa8d89f44a] <==
	* E0915 04:15:12.827664       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 04:15:12.899357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 04:15:13.077133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 04:15:13.105691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 04:15:13.291307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 04:15:13.366242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:13.370892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:13.501009       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:13.726063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 04:15:13.930100       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 04:15:13.950002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 04:15:14.077965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 04:15:14.124593       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 04:15:16.618157       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 04:15:16.721072       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 04:15:17.165812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:17.680597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:17.898657       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:18.048731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 04:15:18.437352       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 04:15:18.512218       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 04:15:18.625233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 04:15:18.646083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 04:15:18.930942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0915 04:15:27.160631       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 04:04:34 UTC, end at Wed 2021-09-15 04:16:06 UTC. --
	Sep 15 04:15:46 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:46.434052    6877 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 15 04:15:46 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:46.434257    6877 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 15 04:15:49 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:49.052283    6877 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 04:15:49 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:49.361082    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d75f136-5623-4eda-a1b5-dba42802fcb6-xtables-lock\") pod \"kube-proxy-nwp6n\" (UID: \"6d75f136-5623-4eda-a1b5-dba42802fcb6\") "
	Sep 15 04:15:49 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:49.361157    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d75f136-5623-4eda-a1b5-dba42802fcb6-lib-modules\") pod \"kube-proxy-nwp6n\" (UID: \"6d75f136-5623-4eda-a1b5-dba42802fcb6\") "
	Sep 15 04:15:49 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:49.361206    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk775\" (UniqueName: \"kubernetes.io/projected/6d75f136-5623-4eda-a1b5-dba42802fcb6-kube-api-access-lk775\") pod \"kube-proxy-nwp6n\" (UID: \"6d75f136-5623-4eda-a1b5-dba42802fcb6\") "
	Sep 15 04:15:49 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:49.361260    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d75f136-5623-4eda-a1b5-dba42802fcb6-kube-proxy\") pod \"kube-proxy-nwp6n\" (UID: \"6d75f136-5623-4eda-a1b5-dba42802fcb6\") "
	Sep 15 04:15:51 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:51.586044    6877 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 04:15:51 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:51.587014    6877 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 04:15:51 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:51.605481    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkc9\" (UniqueName: \"kubernetes.io/projected/4fa3a344-e0bd-49cf-b895-13803879b716-kube-api-access-lgkc9\") pod \"coredns-78fcd69978-gx5vz\" (UID: \"4fa3a344-e0bd-49cf-b895-13803879b716\") "
	Sep 15 04:15:51 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:51.605574    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ede7396-ef56-4632-a9c4-e93bdaa79295-config-volume\") pod \"coredns-78fcd69978-s68wr\" (UID: \"0ede7396-ef56-4632-a9c4-e93bdaa79295\") "
	Sep 15 04:15:51 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:51.605647    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64b8l\" (UniqueName: \"kubernetes.io/projected/0ede7396-ef56-4632-a9c4-e93bdaa79295-kube-api-access-64b8l\") pod \"coredns-78fcd69978-s68wr\" (UID: \"0ede7396-ef56-4632-a9c4-e93bdaa79295\") "
	Sep 15 04:15:51 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:51.605711    6877 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fa3a344-e0bd-49cf-b895-13803879b716-config-volume\") pod \"coredns-78fcd69978-gx5vz\" (UID: \"4fa3a344-e0bd-49cf-b895-13803879b716\") "
	Sep 15 04:15:52 no-preload-20210915034542-22140 kubelet[6877]: W0915 04:15:52.088271    6877 container.go:586] Failed to update stats for container "/kubepods/burstable/pod0ede7396-ef56-4632-a9c4-e93bdaa79295": /sys/fs/cgroup/cpuset/kubepods/burstable/pod0ede7396-ef56-4632-a9c4-e93bdaa79295/cpuset.cpus found to be empty, continuing to push stats
	Sep 15 04:15:58 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:58.001746    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-s68wr through plugin: invalid network status for"
	Sep 15 04:15:58 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:58.596515    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-gx5vz through plugin: invalid network status for"
	Sep 15 04:15:58 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:58.615654    6877 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6af6fc3091f646ac45636a2ca484abe9adf7934ece42c772550eacca51b3d73b"
	Sep 15 04:15:58 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:58.623314    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-s68wr through plugin: invalid network status for"
	Sep 15 04:15:58 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:58.798237    6877 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="16b17b2fdf1168a173d2520ddb42fc896f2e5c70eaeecdf72280fff783be31a2"
	Sep 15 04:15:58 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:15:58.908288    6877 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="be468b4602d1bbc81ff089607dc51aa40134fd565748156ae192bd678155dbae"
	Sep 15 04:16:00 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:16:00.026243    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-gx5vz through plugin: invalid network status for"
	Sep 15 04:16:01 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:16:01.264587    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-s68wr through plugin: invalid network status for"
	Sep 15 04:16:02 no-preload-20210915034542-22140 kubelet[6877]: E0915 04:16:02.119150    6877 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod0ede7396-ef56-4632-a9c4-e93bdaa79295\": RecentStats: unable to find data in memory cache]"
	Sep 15 04:16:02 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:16:02.516141    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-gx5vz through plugin: invalid network status for"
	Sep 15 04:16:02 no-preload-20210915034542-22140 kubelet[6877]: I0915 04:16:02.782680    6877 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-s68wr through plugin: invalid network status for"
	

                                                
                                                
-- /stdout --

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210915034542-22140 -n no-preload-20210915034542-22140

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210915034542-22140 -n no-preload-20210915034542-22140: (4.5657494s)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210915034542-22140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:262: (dbg) Done: kubectl --context no-preload-20210915034542-22140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.0599923s)
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210915034542-22140 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210915034542-22140 describe pod : exit status 1 (254.9581ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210915034542-22140 describe pod : exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (720.81s)
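Note on the post-mortem above: the describe step failed only because the cluster had no non-running pods. The jsonpath query returned an empty list, so "kubectl describe pod" was invoked with zero resource names and exited 1 with "resource name may not be empty". A minimal guard for this pattern, sketched in Go (the names profile and nonRunning are illustrative, not the actual helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// describeNonRunning runs "kubectl describe pod" only when the jsonpath
	// query actually returned pod names; kubectl rejects an empty resource
	// name outright.
	func describeNonRunning(profile, nonRunning string) error {
		names := strings.Fields(nonRunning)
		if len(names) == 0 {
			return nil // every pod is Running; nothing to describe
		}
		args := append([]string{"--context", profile, "describe", "pod"}, names...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Println(string(out))
		return err
	}

	func main() {
		_ = describeNonRunning("no-preload-20210915034542-22140", "")
	}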

                                                
                                    
TestNetworkPlugins/group/auto/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: context deadline exceeded (183.5µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/auto/Start (0.00s)
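This failure and the eight identical TestNetworkPlugins/*/Start failures that follow share one signature: a non-zero exit in well under a millisecond with "context deadline exceeded". The suite-level context had already passed its deadline by the time these subtests were scheduled, so each "minikube start" aborted before doing any work. A minimal Go sketch of the mechanism (illustrative only, not taken from net_test.go):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func main() {
		// Stand-in for the exhausted suite timeout: a context whose
		// deadline has already elapsed.
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()
		time.Sleep(time.Millisecond)

		start := time.Now()
		// Any step that consults the context fails up front...
		err := ctx.Err()
		// ...matching the (0s) durations and microsecond exits recorded here.
		fmt.Printf("failed start: %v (after %v)\n", err, time.Since(start))
	}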

                                                
                                    
TestNetworkPlugins/group/false/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/false/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/cilium/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/calico/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: context deadline exceeded (139.1µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: context deadline exceeded (874.1µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20210915032703-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kindnet/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20210915032655-22140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: context deadline exceeded (1.0153ms)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.00s)

                                                
                                    

Test pass (194/232)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 15.47
4 TestDownloadOnly/v1.14.0/preload-exists 0.01
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.6
10 TestDownloadOnly/v1.22.1/json-events 14.17
11 TestDownloadOnly/v1.22.1/preload-exists 0
14 TestDownloadOnly/v1.22.1/kubectl 0
15 TestDownloadOnly/v1.22.1/LogsDuration 0.58
17 TestDownloadOnly/v1.22.2-rc.0/json-events 11.92
18 TestDownloadOnly/v1.22.2-rc.0/preload-exists 0
21 TestDownloadOnly/v1.22.2-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.2-rc.0/LogsDuration 0.57
23 TestDownloadOnly/DeleteAll 6.04
24 TestDownloadOnly/DeleteAlwaysSucceeds 3.62
25 TestDownloadOnlyKic 44.52
26 TestOffline 662.91
28 TestAddons/Setup 801.03
31 TestAddons/parallel/Ingress 90.93
32 TestAddons/parallel/MetricsServer 16.55
33 TestAddons/parallel/HelmTiller 69.06
34 TestAddons/parallel/Olm 432.09
35 TestAddons/parallel/CSI 261.93
36 TestAddons/parallel/GCPAuth 351.14
37 TestAddons/StoppedEnableDisable 30.19
39 TestDockerFlags 534.27
40 TestForceSystemdFlag 400.31
41 TestForceSystemdEnv 569.46
46 TestErrorSpam/setup 197.12
47 TestErrorSpam/start 13.01
48 TestErrorSpam/status 14.02
49 TestErrorSpam/pause 14.52
50 TestErrorSpam/unpause 14.93
51 TestErrorSpam/stop 28.91
54 TestFunctional/serial/CopySyncFile 0.05
55 TestFunctional/serial/StartWithProxy 209.25
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 29.83
58 TestFunctional/serial/KubeContext 0.2
59 TestFunctional/serial/KubectlGetPods 0.49
62 TestFunctional/serial/CacheCmd/cache/add_remote 15.42
63 TestFunctional/serial/CacheCmd/cache/add_local 8.02
64 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.53
65 TestFunctional/serial/CacheCmd/cache/list 0.44
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 4.93
67 TestFunctional/serial/CacheCmd/cache/cache_reload 16.46
68 TestFunctional/serial/CacheCmd/cache/delete 0.95
69 TestFunctional/serial/MinikubeKubectlCmd 1.34
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.04
71 TestFunctional/serial/ExtraConfig 114.52
72 TestFunctional/serial/ComponentHealth 0.34
73 TestFunctional/serial/LogsCmd 7.74
74 TestFunctional/serial/LogsFileCmd 8.97
76 TestFunctional/parallel/ConfigCmd 3.32
78 TestFunctional/parallel/DryRun 6.78
79 TestFunctional/parallel/InternationalLanguage 2.77
80 TestFunctional/parallel/StatusCmd 13.86
84 TestFunctional/parallel/AddonsCmd 1.9
85 TestFunctional/parallel/PersistentVolumeClaim 64.24
87 TestFunctional/parallel/SSHCmd 8.84
88 TestFunctional/parallel/CpCmd 10.44
89 TestFunctional/parallel/MySQL 128.39
90 TestFunctional/parallel/FileSync 5.46
91 TestFunctional/parallel/CertSync 27.98
95 TestFunctional/parallel/NodeLabels 0.32
96 TestFunctional/parallel/LoadImage 16.66
97 TestFunctional/parallel/SaveImage 16.81
98 TestFunctional/parallel/RemoveImage 19.5
100 TestFunctional/parallel/SaveImageToFile 20.68
101 TestFunctional/parallel/BuildImage 16.36
102 TestFunctional/parallel/ListImages 4.28
103 TestFunctional/parallel/NonActiveRuntimeDisabled 5.79
105 TestFunctional/parallel/ProfileCmd/profile_not_create 5.95
106 TestFunctional/parallel/ProfileCmd/profile_list 6.45
107 TestFunctional/parallel/ProfileCmd/profile_json_output 5.43
108 TestFunctional/parallel/Version/short 0.68
109 TestFunctional/parallel/Version/components 7.87
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 101.82
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.38
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
120 TestFunctional/parallel/DockerEnv/powershell 21.24
121 TestFunctional/parallel/UpdateContextCmd/no_changes 3.58
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.37
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 1.86
124 TestFunctional/delete_busybox_image 1.78
125 TestFunctional/delete_my-image_image 0.77
126 TestFunctional/delete_minikube_cached_images 0.75
130 TestJSONOutput/start/Command 223.26
131 TestJSONOutput/start/Audit 0
133 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/pause/Command 6.07
137 TestJSONOutput/pause/Audit 0
139 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/unpause/Command 5.46
143 TestJSONOutput/unpause/Audit 0
145 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/stop/Command 18.84
149 TestJSONOutput/stop/Audit 0
151 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
153 TestErrorJSONOutput 5.43
155 TestKicCustomNetwork/create_custom_network 209.73
156 TestKicCustomNetwork/use_default_bridge_network 209.41
157 TestKicExistingNetwork 211.35
158 TestMainNoArgs 0.49
161 TestMultiNode/serial/FreshStart2Nodes 405.79
162 TestMultiNode/serial/DeployApp2Nodes 25.42
163 TestMultiNode/serial/PingHostFrom2Pods 9.46
164 TestMultiNode/serial/AddNode 163.65
165 TestMultiNode/serial/ProfileList 5.17
166 TestMultiNode/serial/CopyFile 35.45
167 TestMultiNode/serial/StopNode 21.75
168 TestMultiNode/serial/StartAfterStop 122.52
169 TestMultiNode/serial/RestartKeepsNodes 241.26
170 TestMultiNode/serial/DeleteNode 32.83
171 TestMultiNode/serial/StopMultiNode 38.7
172 TestMultiNode/serial/RestartMultiNode 210.24
173 TestMultiNode/serial/ValidateNameConflict 254.15
178 TestDebPackageInstall/install_amd64_debian_sid/minikube 0
179 TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver 0
181 TestDebPackageInstall/install_amd64_debian_latest/minikube 0
182 TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver 0
184 TestDebPackageInstall/install_amd64_debian_10/minikube 0
185 TestDebPackageInstall/install_amd64_debian_10/kvm2-driver 0
187 TestDebPackageInstall/install_amd64_debian_9/minikube 0
188 TestDebPackageInstall/install_amd64_debian_9/kvm2-driver 0
190 TestDebPackageInstall/install_amd64_ubuntu_latest/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver 0
193 TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube 0
194 TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver 0
196 TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube 0
197 TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver 0
199 TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube 0
200 TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver 0
201 TestPreload 498.1
204 TestSkaffold 318.2
207 TestRunningBinaryUpgrade 1026.1
209 TestKubernetesUpgrade 1161.45
210 TestMissingContainerUpgrade 1174.76
212 TestPause/serial/Start 624.37
213 TestStoppedBinaryUpgrade/Upgrade 978.32
214 TestPause/serial/SecondStartNoReconfiguration 94.44
222 TestPause/serial/Pause 18.18
224 TestPause/serial/Unpause 13.37
225 TestPause/serial/PauseAgain 24.21
227 TestStoppedBinaryUpgrade/MinikubeLogs 20.87
240 TestStartStop/group/old-k8s-version/serial/FirstStart 589.99
242 TestStartStop/group/no-preload/serial/FirstStart 1028.22
243 TestStartStop/group/old-k8s-version/serial/DeployApp 20.47
245 TestStartStop/group/embed-certs/serial/FirstStart 621.97
246 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 7.72
248 TestStartStop/group/default-k8s-different-port/serial/FirstStart 593.75
249 TestStartStop/group/old-k8s-version/serial/Stop 23.98
250 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 4.75
251 TestStartStop/group/old-k8s-version/serial/SecondStart 827.15
252 TestStartStop/group/default-k8s-different-port/serial/DeployApp 68.47
253 TestStartStop/group/embed-certs/serial/DeployApp 47.41
254 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 16.21
255 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 15.54
256 TestStartStop/group/embed-certs/serial/Stop 33.96
257 TestStartStop/group/default-k8s-different-port/serial/Stop 36.81
258 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 4.07
259 TestStartStop/group/embed-certs/serial/SecondStart 920.34
260 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 4.64
261 TestStartStop/group/default-k8s-different-port/serial/SecondStart 949.69
262 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.17
263 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.06
264 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 5.58
265 TestStartStop/group/old-k8s-version/serial/Pause 55.37
266 TestStartStop/group/no-preload/serial/DeployApp 23.77
268 TestStartStop/group/newest-cni/serial/FirstStart 362.28
269 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 24.77
270 TestStartStop/group/no-preload/serial/Stop 31.75
271 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 3.12
273 TestStartStop/group/newest-cni/serial/DeployApp 0
274 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 18.74
275 TestStartStop/group/newest-cni/serial/Stop 32.21
276 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 4.05
277 TestStartStop/group/newest-cni/serial/SecondStart 235.95
278 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.29
279 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
280 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
281 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 6.76
282 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.98
283 TestStartStop/group/newest-cni/serial/Pause 48.34
284 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 6.23
285 TestStartStop/group/embed-certs/serial/Pause 56.09
286 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.25
287 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.5
288 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.87
289 TestStartStop/group/default-k8s-different-port/serial/Pause 53.81
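
Each row above is "<test number> <test name> <seconds elapsed>". When skimming a listing this long it can help to re-sort it by elapsed time; the sketch below is a small, hypothetical helper for doing that, not part of the report tooling, and assumes only the three-column row format shown above.

    // Hypothetical helper: re-sort "<n> <test name> <seconds>" rows by
    // descending elapsed time. Reads rows on stdin, prints slowest first.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"sort"
    	"strconv"
    	"strings"
    )

    func main() {
    	type row struct {
    		name    string
    		elapsed float64
    	}
    	var rows []row
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		f := strings.Fields(sc.Text())
    		if len(f) != 3 {
    			continue // not a "<n> <name> <seconds>" row
    		}
    		sec, err := strconv.ParseFloat(f[2], 64)
    		if err != nil {
    			continue
    		}
    		rows = append(rows, row{name: f[1], elapsed: sec})
    	}
    	sort.Slice(rows, func(i, j int) bool { return rows[i].elapsed > rows[j].elapsed })
    	for _, r := range rows {
    		fmt.Printf("%9.2fs  %s\n", r.elapsed, r.name)
    	}
    }
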
TestDownloadOnly/v1.14.0/json-events (15.47s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915012818-22140 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915012818-22140 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker: (15.4676814s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (15.47s)
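
The json-events subtest drives `minikube start -o=json`, which writes one JSON event per stdout line. Below is a minimal sketch of consuming that stream from Go; the profile name is a placeholder, and the "type"/"data" keys are assumptions about the event payload rather than a documented contract.

    // Sketch: stream the line-delimited JSON events that
    // "minikube start -o=json" writes to stdout.
    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Flags mirror the test invocation above; the profile name is a placeholder.
    	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
    		"-p", "download-only-example", "--driver=docker")
    	stdout, err := cmd.StdoutPipe()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := cmd.Start(); err != nil {
    		log.Fatal(err)
    	}
    	sc := bufio.NewScanner(stdout)
    	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some events are long lines
    	for sc.Scan() {
    		var ev map[string]interface{}
    		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
    			continue // tolerate any non-JSON output
    		}
    		// "type" and "data" are assumptions about the event shape.
    		fmt.Println(ev["type"], ev["data"])
    	}
    	_ = cmd.Wait()
    }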

TestDownloadOnly/v1.14.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.01s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.6s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210915012818-22140
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210915012818-22140: exit status 85 (594.304ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:28:18
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 01:28:18.996083   43116 out.go:298] Setting OutFile to fd 632 ...
	I0915 01:28:18.998102   43116 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:28:18.998523   43116 out.go:311] Setting ErrFile to fd 636...
	I0915 01:28:18.998523   43116 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 01:28:19.020163   43116 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0915 01:28:19.027280   43116 out.go:305] Setting JSON to true
	I0915 01:28:19.034238   43116 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10272081,"bootTime":1621397218,"procs":150,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 01:28:19.034238   43116 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 01:28:19.042634   43116 notify.go:169] Checking for updates...
	I0915 01:28:19.046758   43116 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 01:28:20.954921   43116 docker.go:132] docker version: linux-20.10.5
	I0915 01:28:20.965580   43116 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:28:21.934771   43116 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 01:28:21.5192263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 01:28:21.938290   43116 start.go:278] selected driver: docker
	I0915 01:28:21.938290   43116 start.go:751] validating driver "docker" against <nil>
	I0915 01:28:21.974734   43116 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:28:22.921866   43116 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 01:28:22.5203983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 01:28:22.922491   43116 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	I0915 01:28:23.015478   43116 start_flags.go:345] Using suggested 15300MB memory alloc based on sys=61438MB, container=20001MB
	I0915 01:28:23.017357   43116 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 01:28:23.018061   43116 cni.go:93] Creating CNI manager for ""
	I0915 01:28:23.018572   43116 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:28:23.018918   43116 start_flags.go:278] config:
	{Name:download-only-20210915012818-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210915012818-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:28:23.025459   43116 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 01:28:23.028857   43116 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 01:28:23.028857   43116 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 01:28:23.060283   43116 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 01:28:23.060283   43116 cache.go:57] Caching tarball of preloaded images
	I0915 01:28:23.061285   43116 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 01:28:23.064273   43116 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:23.091274   43116 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:f9e1bc5997daac3e4aca6f6bb5ce5b14 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 01:28:23.786231   43116 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:28:23.786231   43116 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 01:28:23.787232   43116 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 01:28:23.787232   43116 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 01:28:23.788294   43116 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:28:27.855056   43116 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:27.857063   43116 preload.go:254] verifying checksumm of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:30.369816   43116 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0915 01:28:30.371457   43116 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\download-only-20210915012818-22140\config.json ...
	I0915 01:28:30.371814   43116 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\download-only-20210915012818-22140\config.json: {Name:mk3c2bb1d477c3f6ab384ed622b8c20e23e3d4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 01:28:30.374084   43116 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 01:28:30.378250   43116 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins\minikube-integration\.minikube\cache\windows\v1.14.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915012818-22140"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.60s)
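
Note the inversion here: `minikube logs` exits with status 85 because the download-only profile has no control plane node, and the subtest passes precisely because it expects that non-zero exit. A sketch of reading such an exit code with the standard library follows; the profile name is a placeholder.

    // Sketch: run "minikube logs -p <profile>" and read its exit code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Placeholder profile name.
    	cmd := exec.Command("minikube", "logs", "-p", "download-only-example")
    	out, err := cmd.CombinedOutput()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		// The runs above exited 85 when no control plane node existed.
    		fmt.Printf("minikube logs exited with status %d\n", ee.ExitCode())
    	}
    	fmt.Printf("%s", out)
    }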

TestDownloadOnly/v1.22.1/json-events (14.17s)

=== RUN   TestDownloadOnly/v1.22.1/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915012818-22140 --force --alsologtostderr --kubernetes-version=v1.22.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915012818-22140 --force --alsologtostderr --kubernetes-version=v1.22.1 --container-runtime=docker --driver=docker: (14.1717972s)
--- PASS: TestDownloadOnly/v1.22.1/json-events (14.17s)

TestDownloadOnly/v1.22.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.1/preload-exists
--- PASS: TestDownloadOnly/v1.22.1/preload-exists (0.00s)

TestDownloadOnly/v1.22.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.1/kubectl
--- PASS: TestDownloadOnly/v1.22.1/kubectl (0.00s)

TestDownloadOnly/v1.22.1/LogsDuration (0.58s)

=== RUN   TestDownloadOnly/v1.22.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210915012818-22140
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210915012818-22140: exit status 85 (579.1544ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:28:35
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 01:28:35.040648   50620 out.go:298] Setting OutFile to fd 708 ...
	I0915 01:28:35.042057   50620 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:28:35.042057   50620 out.go:311] Setting ErrFile to fd 712...
	I0915 01:28:35.042057   50620 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 01:28:35.068046   50620 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0915 01:28:35.069499   50620 out.go:305] Setting JSON to true
	I0915 01:28:35.076589   50620 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10272097,"bootTime":1621397218,"procs":150,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 01:28:35.077155   50620 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 01:28:35.082525   50620 notify.go:169] Checking for updates...
	I0915 01:28:35.088382   50620 config.go:177] Loaded profile config "download-only-20210915012818-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	W0915 01:28:35.089008   50620 start.go:659] api.Load failed for download-only-20210915012818-22140: filestore "download-only-20210915012818-22140": Docker machine "download-only-20210915012818-22140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 01:28:35.089008   50620 driver.go:343] Setting default libvirt URI to qemu:///system
	W0915 01:28:35.089953   50620 start.go:659] api.Load failed for download-only-20210915012818-22140: filestore "download-only-20210915012818-22140": Docker machine "download-only-20210915012818-22140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 01:28:37.055927   50620 docker.go:132] docker version: linux-20.10.5
	I0915 01:28:37.071727   50620 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:28:38.043605   50620 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 01:28:37.6200369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 01:28:38.047125   50620 start.go:278] selected driver: docker
	I0915 01:28:38.047125   50620 start.go:751] validating driver "docker" against &{Name:download-only-20210915012818-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210915012818-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:28:38.074922   50620 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:28:39.069475   50620 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 01:28:38.6668779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 01:28:39.142950   50620 cni.go:93] Creating CNI manager for ""
	I0915 01:28:39.142950   50620 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:28:39.142950   50620 start_flags.go:278] config:
	{Name:download-only-20210915012818-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:download-only-20210915012818-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:28:39.146807   50620 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 01:28:39.148816   50620 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 01:28:39.148816   50620 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 01:28:39.168805   50620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 01:28:39.168805   50620 cache.go:57] Caching tarball of preloaded images
	I0915 01:28:39.169816   50620 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 01:28:39.172816   50620 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:39.201119   50620 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4?checksum=md5:df04359146fc74639fed093942461742 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 01:28:39.768976   50620 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:28:39.768976   50620 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 01:28:39.768976   50620 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 01:28:39.768976   50620 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 01:28:39.769975   50620 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory, skipping pull
	I0915 01:28:39.769975   50620 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in cache, skipping pull
	I0915 01:28:39.769975   50620 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 as a tarball
	I0915 01:28:44.303105   50620 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:44.304102   50620 preload.go:254] verifying checksumm of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915012818-22140"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.1/LogsDuration (0.58s)
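
As in the v1.14.0 run, the preload tarball above is fetched with a `?checksum=md5:...` query and verified after download (the "getting checksum" and "verifying" lines). Below is a self-contained sketch of that verification step; the path is a placeholder, while the digest is the md5 shown in the download URL above.

    // Sketch: verify a downloaded preload tarball against an expected MD5 digest.
    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    func verifyMD5(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	// Path is a placeholder; the digest is taken from the download URL above.
    	fmt.Println(verifyMD5(
    		"preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4",
    		"df04359146fc74639fed093942461742"))
    }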

TestDownloadOnly/v1.22.2-rc.0/json-events (11.92s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915012818-22140 --force --alsologtostderr --kubernetes-version=v1.22.2-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915012818-22140 --force --alsologtostderr --kubernetes-version=v1.22.2-rc.0 --container-runtime=docker --driver=docker: (11.916736s)
--- PASS: TestDownloadOnly/v1.22.2-rc.0/json-events (11.92s)

TestDownloadOnly/v1.22.2-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.2-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.2-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.2-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.22.2-rc.0/LogsDuration (0.57s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210915012818-22140
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210915012818-22140: exit status 85 (558.112ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 01:28:49
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 01:28:49.842151   34104 out.go:298] Setting OutFile to fd 772 ...
	I0915 01:28:49.844147   34104 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 01:28:49.844147   34104 out.go:311] Setting ErrFile to fd 776...
	I0915 01:28:49.844147   34104 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 01:28:49.862919   34104 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0915 01:28:49.864273   34104 out.go:305] Setting JSON to true
	I0915 01:28:49.870301   34104 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10272112,"bootTime":1621397217,"procs":150,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 01:28:49.870685   34104 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 01:28:49.875465   34104 notify.go:169] Checking for updates...
	I0915 01:28:49.879991   34104 config.go:177] Loaded profile config "download-only-20210915012818-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	W0915 01:28:49.880374   34104 start.go:659] api.Load failed for download-only-20210915012818-22140: filestore "download-only-20210915012818-22140": Docker machine "download-only-20210915012818-22140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 01:28:49.880806   34104 driver.go:343] Setting default libvirt URI to qemu:///system
	W0915 01:28:49.880806   34104 start.go:659] api.Load failed for download-only-20210915012818-22140: filestore "download-only-20210915012818-22140": Docker machine "download-only-20210915012818-22140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 01:28:50.495348   34104 docker.go:132] docker version: linux-20.10.5
	I0915 01:28:50.508889   34104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:28:51.471803   34104 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 01:28:51.0422173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 01:28:51.475211   34104 start.go:278] selected driver: docker
	I0915 01:28:51.475577   34104 start.go:751] validating driver "docker" against &{Name:download-only-20210915012818-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:download-only-20210915012818-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:28:51.508714   34104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 01:28:52.457434   34104 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 01:28:52.0424246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 01:28:52.542782   34104 cni.go:93] Creating CNI manager for ""
	I0915 01:28:52.542994   34104 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 01:28:52.542994   34104 start_flags.go:278] config:
	{Name:download-only-20210915012818-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:download-only-20210915012818-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 01:28:52.545436   34104 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 01:28:52.548289   34104 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 01:28:52.548289   34104 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 01:28:52.569320   34104 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 01:28:52.569320   34104 cache.go:57] Caching tarball of preloaded images
	I0915 01:28:52.570307   34104 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 01:28:52.573290   34104 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:52.599896   34104 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:55e401cc9516bdfbac04c93d8ed559d4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 01:28:53.249883   34104 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 01:28:53.249883   34104 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 01:28:53.250484   34104 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 01:28:53.250484   34104 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 01:28:53.250884   34104 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory, skipping pull
	I0915 01:28:53.251070   34104 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in cache, skipping pull
	I0915 01:28:53.251241   34104 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 as a tarball
	I0915 01:28:57.283275   34104 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:57.284795   34104 preload.go:254] verifying checksumm of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 01:28:59.374076   34104 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2-rc.0 on docker
	I0915 01:28:59.374609   34104 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\download-only-20210915012818-22140\config.json ...
	I0915 01:28:59.378436   34104 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 01:28:59.379356   34104 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.2-rc.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.2-rc.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins\minikube-integration\.minikube\cache\windows\v1.22.2-rc.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915012818-22140"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.2-rc.0/LogsDuration (0.57s)
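
The "?checksum=file:" suffix on the kubectl.exe download URL above tells the downloader to fetch the published .sha256 file and verify the binary against it. The same check can be done by hand; a sketch using curl.exe and PowerShell's Get-FileHash, neither of which appears in this log:

    curl.exe -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.2-rc.0/bin/windows/amd64/kubectl.exe
    curl.exe -L -o kubectl.exe.sha256 https://storage.googleapis.com/kubernetes-release/release/v1.22.2-rc.0/bin/windows/amd64/kubectl.exe.sha256
    Get-FileHash .\kubectl.exe -Algorithm SHA256

The hash printed by Get-FileHash should match the contents of kubectl.exe.sha256.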

TestDownloadOnly/DeleteAll (6.04s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (6.0397984s)
--- PASS: TestDownloadOnly/DeleteAll (6.04s)

TestDownloadOnly/DeleteAlwaysSucceeds (3.62s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20210915012818-22140
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20210915012818-22140: (3.6161536s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (3.62s)

TestDownloadOnlyKic (44.52s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20210915012916-22140 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20210915012916-22140 --force --alsologtostderr --driver=docker: (36.4545512s)
helpers_test.go:176: Cleaning up "download-docker-20210915012916-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20210915012916-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20210915012916-22140: (5.7693133s)
--- PASS: TestDownloadOnlyKic (44.52s)

TestOffline (662.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20210915030944-22140 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20210915030944-22140 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (10m35.2891197s)
helpers_test.go:176: Cleaning up "offline-docker-20210915030944-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20210915030944-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20210915030944-22140: (27.6216024s)
--- PASS: TestOffline (662.91s)

TestAddons/Setup (801.03s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20210915013001-22140 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --addons=ingress --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20210915013001-22140 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --addons=ingress --addons=helm-tiller: (11m43.3780399s)
addons_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons enable gcp-auth
addons_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons enable gcp-auth: (20.6641875s)
addons_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons enable gcp-auth --force
addons_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons enable gcp-auth --force: (1m16.9453206s)
--- PASS: TestAddons/Setup (801.03s)
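
The --addons flags above enable everything in a single start invocation; the same addons can also be toggled individually on the running cluster. A sketch using the standard addons subcommands with this run's profile:

    out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons list
    out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons enable metrics-server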

TestAddons/parallel/Ingress (90.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:170: (dbg) Run:  kubectl --context addons-20210915013001-22140 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Run:  kubectl --context addons-20210915013001-22140 replace --force -f testdata\nginx-ingv1.yaml
addons_test.go:177: (dbg) Done: kubectl --context addons-20210915013001-22140 replace --force -f testdata\nginx-ingv1.yaml: (3.5858084s)
addons_test.go:190: (dbg) Run:  kubectl --context addons-20210915013001-22140 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:190: (dbg) Done: kubectl --context addons-20210915013001-22140 replace --force -f testdata\nginx-pod-svc.yaml: (3.1129934s)
addons_test.go:195: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [20e0db99-7b53-4f2b-b473-6c1c7aa9fe29] Pending
helpers_test.go:343: "nginx" [20e0db99-7b53-4f2b-b473-6c1c7aa9fe29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [20e0db99-7b53-4f2b-b473-6c1c7aa9fe29] Running
addons_test.go:195: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 1m19.1418117s
addons_test.go:215: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:215: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (4.0596111s)
addons_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable ingress --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Ingress (90.93s)
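
The ingress check can be reproduced outside the harness: wait for the nginx pod the way the test does, then curl through the controller with the Host header it routes on. A sketch built from the commands in this log plus a standard kubectl wait:

    kubectl --context addons-20210915013001-22140 wait --for=condition=ready pod -l run=nginx --timeout=4m0s
    out/minikube-windows-amd64.exe -p addons-20210915013001-22140 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"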

TestAddons/parallel/MetricsServer (16.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:330: metrics-server stabilized in 133.6695ms
addons_test.go:332: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-4lv64" [f23d094d-9c6a-4b46-ad1b-b3c5ba988a94] Running
addons_test.go:332: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.1746288s
addons_test.go:338: (dbg) Run:  kubectl --context addons-20210915013001-22140 top pods -n kube-system
addons_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable metrics-server --alsologtostderr -v=1: (10.5232622s)
--- PASS: TestAddons/parallel/MetricsServer (16.55s)
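
The kubectl top call at addons_test.go:338 works because metrics-server registers the resource-metrics API with the aggregator once its pod is healthy. Both can be checked by hand; the APIService name below is the conventional metrics-server registration, an assumption not shown in this log:

    kubectl --context addons-20210915013001-22140 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-20210915013001-22140 top pods -n kube-system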

TestAddons/parallel/HelmTiller (69.06s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:379: tiller-deploy stabilized in 158.8321ms
addons_test.go:381: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-7d9fb5c894-fwzdj" [c3138aac-5ee5-45cf-a5b2-28fb6403d38d] Running
addons_test.go:381: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.2489228s
addons_test.go:396: (dbg) Run:  kubectl --context addons-20210915013001-22140 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:396: (dbg) Done: kubectl --context addons-20210915013001-22140 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (55.868901s)
addons_test.go:401: kubectl --context addons-20210915013001-22140 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:413: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:413: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable helm-tiller --alsologtostderr -v=1: (7.5898352s)
--- PASS: TestAddons/parallel/HelmTiller (69.06s)
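
The "Unable to use a TTY" stderr above is a side effect of passing -t to kubectl run from a non-interactive harness, after which kubectl falls back to logs. Keeping -i (which --rm needs in order to attach) but dropping -t avoids the warning. A sketch with the test's image and namespace:

    kubectl --context addons-20210915013001-22140 run --rm -i helm-test --restart=Never --image=alpine/helm:2.16.3 --namespace=kube-system -- version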

TestAddons/parallel/Olm (432.09s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:425: (dbg) Run:  kubectl --context addons-20210915013001-22140 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s
addons_test.go:428: catalog-operator stabilized in 861.5736ms
addons_test.go:430: (dbg) Run:  kubectl --context addons-20210915013001-22140 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s
addons_test.go:433: olm-operator stabilized in 1.2656316s
addons_test.go:435: (dbg) Run:  kubectl --context addons-20210915013001-22140 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:438: packageserver stabilized in 1.8862929s
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210915013001-22140 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:443: operatorhubio-catalog stabilized in 2.3226599s
addons_test.go:446: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\etcd.yaml
addons_test.go:446: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\etcd.yaml: (1.3008702s)
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915013001-22140 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915013001-22140 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (432.09s)
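
The repeated get csv calls above are the test polling until OLM reconciles the resources created from testdata\etcd.yaml into a ClusterServiceVersion; "No resources found" only means the CSV has not materialized yet. Interactively, kubectl's watch flag performs the same wait in one command:

    kubectl --context addons-20210915013001-22140 get csv -n my-etcd -w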

TestAddons/parallel/CSI (261.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:484: csi-hostpath-driver pods stabilized in 8.2453202s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:487: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.2398293s)
addons_test.go:492: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:497: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:497: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pv-pod.yaml: (1.2358391s)
addons_test.go:502: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [d4bf4160-de97-4438-a1dc-7154dae9c2a7] Pending
helpers_test.go:343: "task-pv-pod" [d4bf4160-de97-4438-a1dc-7154dae9c2a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [d4bf4160-de97-4438-a1dc-7154dae9c2a7] Running
addons_test.go:502: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 1m9.1046122s
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:507: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\snapshot.yaml: (1.1815368s)
addons_test.go:512: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210915013001-22140 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210915013001-22140 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:517: (dbg) Run:  kubectl --context addons-20210915013001-22140 delete pod task-pv-pod
addons_test.go:517: (dbg) Done: kubectl --context addons-20210915013001-22140 delete pod task-pv-pod: (4.3953237s)
addons_test.go:523: (dbg) Run:  kubectl --context addons-20210915013001-22140 delete pvc hpvc
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:529: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pvc-restore.yaml: (1.2013615s)
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915013001-22140 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:539: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml: (1.2590501s)
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [deef1805-6d9b-4032-9318-96b790a2432a] Pending
helpers_test.go:343: "task-pv-pod-restore" [deef1805-6d9b-4032-9318-96b790a2432a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [deef1805-6d9b-4032-9318-96b790a2432a] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 2m11.0887531s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210915013001-22140 delete pod task-pv-pod-restore
addons_test.go:549: (dbg) Done: kubectl --context addons-20210915013001-22140 delete pod task-pv-pod-restore: (15.2945595s)
addons_test.go:553: (dbg) Run:  kubectl --context addons-20210915013001-22140 delete pvc hpvc-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-20210915013001-22140 delete volumesnapshot new-snapshot-demo
addons_test.go:557: (dbg) Done: kubectl --context addons-20210915013001-22140 delete volumesnapshot new-snapshot-demo: (1.2309199s)
addons_test.go:561: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:565: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:565: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable volumesnapshots --alsologtostderr -v=1: (13.4896414s)
--- PASS: TestAddons/parallel/CSI (261.93s)
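
The WARNING at helpers_test.go:426 shows the snapshot poll reading an empty .status.readyToUse; the field stays unset until the CSI snapshotter finishes, then flips to true. The same probe, taken verbatim from the wait loop:

    kubectl --context addons-20210915013001-22140 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}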

TestAddons/parallel/GCPAuth (351.14s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:576: (dbg) Run:  kubectl --context addons-20210915013001-22140 create -f testdata\busybox.yaml
addons_test.go:576: (dbg) Done: kubectl --context addons-20210915013001-22140 create -f testdata\busybox.yaml: (1.4906916s)
addons_test.go:582: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [773b3fe6-3f0c-4cc2-96f3-b6410823da01] Pending
helpers_test.go:343: "busybox" [773b3fe6-3f0c-4cc2-96f3-b6410823da01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [773b3fe6-3f0c-4cc2-96f3-b6410823da01] Running
addons_test.go:582: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 46.0966501s
addons_test.go:588: (dbg) Run:  kubectl --context addons-20210915013001-22140 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:588: (dbg) Done: kubectl --context addons-20210915013001-22140 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": (2.4098672s)
addons_test.go:625: (dbg) Run:  kubectl --context addons-20210915013001-22140 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:625: (dbg) Done: kubectl --context addons-20210915013001-22140 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT": (1.5446612s)
addons_test.go:641: (dbg) Run:  kubectl --context addons-20210915013001-22140 apply -f testdata\private-image.yaml
addons_test.go:648: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7ff9c8c74f-gw2z2" [47f99f42-93a7-4a57-822e-7c7868a6cf6e] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7ff9c8c74f-gw2z2" [47f99f42-93a7-4a57-822e-7c7868a6cf6e] Running
addons_test.go:648: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 2m59.1566154s
addons_test.go:654: (dbg) Run:  kubectl --context addons-20210915013001-22140 apply -f testdata\private-image-eu.yaml
addons_test.go:654: (dbg) Done: kubectl --context addons-20210915013001-22140 apply -f testdata\private-image-eu.yaml: (1.586665s)
addons_test.go:661: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-9vhmc" [19294a80-b42c-4bda-bd87-432d2a6e09e4] Pending
helpers_test.go:343: "private-image-eu-5956d58f9f-9vhmc" [19294a80-b42c-4bda-bd87-432d2a6e09e4] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-5956d58f9f-9vhmc" [19294a80-b42c-4bda-bd87-432d2a6e09e4] Running
addons_test.go:661: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 1m51.1530898s
addons_test.go:667: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:667: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915013001-22140 addons disable gcp-auth --alsologtostderr -v=1: (6.5436063s)
--- PASS: TestAddons/parallel/GCPAuth (351.14s)
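
The printenv checks above pass because the gcp-auth addon runs a mutating admission webhook that injects credentials into pods created after it is enabled. A sketch for inspecting the injection by hand (the webhook configuration name varies, so this only lists registrations):

    kubectl --context addons-20210915013001-22140 get mutatingwebhookconfigurations
    kubectl --context addons-20210915013001-22140 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"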

TestAddons/StoppedEnableDisable (30.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20210915013001-22140
addons_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20210915013001-22140: (27.9448333s)
addons_test.go:144: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20210915013001-22140
addons_test.go:144: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20210915013001-22140: (1.1233002s)
addons_test.go:148: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20210915013001-22140
addons_test.go:148: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20210915013001-22140: (1.1225428s)
--- PASS: TestAddons/StoppedEnableDisable (30.19s)

TestDockerFlags (534.27s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20210915032727-22140 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
E0915 03:28:21.199705   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:28:22.128502   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:28:23.730843   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:29:46.816680   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:33:05.222251   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:33:21.197235   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:33:22.128288   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:33:23.730264   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
docker_test.go:46: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20210915032727-22140 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (8m15.0535773s)
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210915032727-22140 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210915032727-22140 ssh "sudo systemctl show docker --property=Environment --no-pager": (7.0997394s)
docker_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210915032727-22140 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210915032727-22140 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (5.159644s)
helpers_test.go:176: Cleaning up "docker-flags-20210915032727-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20210915032727-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20210915032727-22140: (26.9491581s)
--- PASS: TestDockerFlags (534.27s)
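
The two systemctl probes above are how the test verifies flag plumbing: --docker-env values are expected in the daemon's Environment property (here FOO=BAR and BAZ=BAT) and --docker-opt values in its ExecStart line (here debug and icc=true); the captured output is not shown in this report. The probes can be rerun as-is:

    out/minikube-windows-amd64.exe -p docker-flags-20210915032727-22140 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-windows-amd64.exe -p docker-flags-20210915032727-22140 ssh "sudo systemctl show docker --property=ExecStart --no-pager"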

TestForceSystemdFlag (400.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20210915032047-22140 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20210915032047-22140 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (6m0.8289547s)
docker_test.go:103: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20210915032047-22140 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:103: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20210915032047-22140 ssh "docker info --format {{.CgroupDriver}}": (6.5856747s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210915032047-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210915032047-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210915032047-22140: (32.896482s)
--- PASS: TestForceSystemdFlag (400.31s)
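
With --force-systemd, the docker info probe above is expected to print systemd rather than the default cgroupfs; that expectation follows from the test's purpose and the output itself is not visible in this excerpt. The probe on its own:

    out/minikube-windows-amd64.exe -p force-systemd-flag-20210915032047-22140 ssh "docker info --format {{.CgroupDriver}}"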

TestForceSystemdEnv (569.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20210915032650-22140 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20210915032650-22140 --memory=2048 --alsologtostderr -v=5 --driver=docker: (8m46.5994907s)
docker_test.go:103: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20210915032650-22140 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:103: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20210915032650-22140 ssh "docker info --format {{.CgroupDriver}}": (10.0319258s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210915032650-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20210915032650-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20210915032650-22140: (32.830061s)
--- PASS: TestForceSystemdEnv (569.46s)
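
Unlike TestForceSystemdFlag, this start line carries no --force-systemd flag, so the systemd requirement presumably arrives via the MINIKUBE_FORCE_SYSTEMD environment variable set by the harness (an assumption; the variable is not visible in this log). A PowerShell sketch of the equivalent manual run:

    $env:MINIKUBE_FORCE_SYSTEMD = "true"
    out/minikube-windows-amd64.exe start -p force-systemd-env-20210915032650-22140 --memory=2048 --driver=docker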

TestErrorSpam/setup (197.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20210915015122-22140 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 --driver=docker
E0915 01:53:22.153387   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.175926   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.186759   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.208042   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.249495   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.330547   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.491605   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:22.812470   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:23.453153   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:24.735224   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:27.295584   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:32.416482   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:53:42.657493   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:54:03.137870   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
error_spam_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20210915015122-22140 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 --driver=docker: (3m17.1215198s)
error_spam_test.go:89: acceptable stderr: "! C:\\Program Files\\Docker\\Docker\\resources\\bin\\kubectl.exe is version 1.20.0, which may have incompatibilites with Kubernetes 1.22.1."
--- PASS: TestErrorSpam/setup (197.12s)
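
The cert_rotation.go errors repeated above come from kubeconfig entries that still reference client certificates of profiles deleted earlier in the run; client-go's certificate reloader inside the process keeps retrying the missing paths. Removing a stale context stops the noise; a sketch with one of the contexts named in the errors:

    kubectl config delete-context addons-20210915013001-22140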

TestErrorSpam/start (13.01s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 start --dry-run
E0915 01:54:44.100208   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 start --dry-run: (5.014424s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 start --dry-run: (3.4091794s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 start --dry-run
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 start --dry-run: (4.5752493s)
--- PASS: TestErrorSpam/start (13.01s)

TestErrorSpam/status (14.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 status: (3.8442331s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 status: (5.0959164s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 status
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 status: (5.0679126s)
--- PASS: TestErrorSpam/status (14.02s)

TestErrorSpam/pause (14.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 pause: (5.7430158s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 pause: (4.3609594s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 pause
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 pause: (4.4143373s)
--- PASS: TestErrorSpam/pause (14.52s)

TestErrorSpam/unpause (14.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 unpause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 unpause: (5.4940388s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 unpause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 unpause: (4.7865706s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 unpause
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 unpause: (4.6433252s)
--- PASS: TestErrorSpam/unpause (14.93s)

TestErrorSpam/stop (28.91s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 stop
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 stop: (18.291741s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 stop
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 stop: (5.3344378s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 stop
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915015122-22140 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915015122-22140 stop: (5.2772403s)
--- PASS: TestErrorSpam/stop (28.91s)

TestFunctional/serial/CopySyncFile (0.05s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1726: local sync path: C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\22140\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.05s)

TestFunctional/serial/StartWithProxy (209.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2102: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0915 01:58:22.148911   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 01:58:49.865564   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
functional_test.go:2102: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (3m29.2477273s)
--- PASS: TestFunctional/serial/StartWithProxy (209.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --alsologtostderr -v=8
functional_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --alsologtostderr -v=8: (29.822227s)
functional_test.go:751: soft start took 29.8270106s for "functional-20210915015618-22140" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.83s)

TestFunctional/serial/KubeContext (0.2s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:767: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.20s)

TestFunctional/serial/KubectlGetPods (0.49s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:780: (dbg) Run:  kubectl --context functional-20210915015618-22140 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.49s)

TestFunctional/serial/CacheCmd/cache/add_remote (15.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add k8s.gcr.io/pause:3.1
functional_test.go:1102: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add k8s.gcr.io/pause:3.1: (4.4600282s)
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add k8s.gcr.io/pause:3.3: (6.1540681s)
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add k8s.gcr.io/pause:latest
functional_test.go:1102: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add k8s.gcr.io/pause:latest: (4.8010948s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (15.42s)

TestFunctional/serial/CacheCmd/cache/add_local (8.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1132: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210915015618-22140 C:\Users\jenkins\AppData\Local\Temp\functional-20210915015618-221402198086587
functional_test.go:1132: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210915015618-22140 C:\Users\jenkins\AppData\Local\Temp\functional-20210915015618-221402198086587: (2.5244623s)
functional_test.go:1144: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add minikube-local-cache-test:functional-20210915015618-22140
functional_test.go:1144: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache add minikube-local-cache-test:functional-20210915015618-22140: (4.2787317s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache delete minikube-local-cache-test:functional-20210915015618-22140
functional_test.go:1138: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210915015618-22140
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (8.02s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1156: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.53s)

TestFunctional/serial/CacheCmd/cache/list (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.44s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (4.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1176: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo crictl images
functional_test.go:1176: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo crictl images: (4.9296442s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (4.93s)

TestFunctional/serial/CacheCmd/cache/cache_reload (16.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1198: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1198: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo docker rmi k8s.gcr.io/pause:latest: (3.6225686s)
functional_test.go:1204: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1204: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (4.7536378s)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	! Executing "docker container inspect functional-20210915015618-22140 --format={{.State.Status}}" took an unusually long time: 2.0092047s
	* Restarting the docker service may improve performance.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1209: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache reload
functional_test.go:1209: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cache reload: (4.6524721s)
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1214: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (3.4264791s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (16.46s)
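
The cache_reload sequence above is a round trip: delete the image inside the node, confirm that "crictl inspecti" now fails (the expected exit status 1), run "cache reload" to re-push the host-side cache, and confirm the image is back. A sketch of that flow, assuming a hypothetical run helper around the same binary and profile:

package main

import (
	"fmt"
	"os/exec"
)

const profile = "functional-20210915015618-22140"

// run is a hypothetical helper invoking the same binary this report uses.
func run(args ...string) error {
	return exec.Command("out/minikube-windows-amd64.exe", args...).Run()
}

func main() {
	// 1. Remove the image inside the node.
	run("-p", profile, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest")
	// 2. inspecti must now fail (the log's expected exit status 1).
	if run("-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest") == nil {
		fmt.Println("image should have been gone")
	}
	// 3. Re-push it from the host-side cache.
	run("-p", profile, "cache", "reload")
	// 4. inspecti must succeed again.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
		fmt.Println("image was not restored:", err)
	}
}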

TestFunctional/serial/CacheCmd/cache/delete (0.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1223: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1223: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.95s)

TestFunctional/serial/MinikubeKubectlCmd (1.34s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:798: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 kubectl -- --context functional-20210915015618-22140 get pods
functional_test.go:798: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 kubectl -- --context functional-20210915015618-22140 get pods: (1.3377111s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.34s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.04s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:821: (dbg) Run:  out\kubectl.exe --context functional-20210915015618-22140 get pods
functional_test.go:821: (dbg) Done: out\kubectl.exe --context functional-20210915015618-22140 get pods: (2.0355356s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.04s)

TestFunctional/serial/ExtraConfig (114.52s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:835: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:835: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m54.5184792s)
functional_test.go:839: restart took 1m54.519292s for "functional-20210915015618-22140" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (114.52s)

TestFunctional/serial/ComponentHealth (0.34s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:886: (dbg) Run:  kubectl --context functional-20210915015618-22140 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:900: etcd phase: Running
functional_test.go:910: etcd status: Ready
functional_test.go:900: kube-apiserver phase: Running
functional_test.go:910: kube-apiserver status: Ready
functional_test.go:900: kube-controller-manager phase: Running
functional_test.go:910: kube-controller-manager status: Ready
functional_test.go:900: kube-scheduler phase: Running
functional_test.go:910: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.34s)

TestFunctional/serial/LogsCmd (7.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 logs
functional_test.go:1285: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 logs: (7.7440519s)
--- PASS: TestFunctional/serial/LogsCmd (7.74s)

TestFunctional/serial/LogsFileCmd (8.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1301: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 logs --file C:\Users\jenkins\AppData\Local\Temp\functional-20210915015618-221402117463029\logs.txt
functional_test.go:1301: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 logs --file C:\Users\jenkins\AppData\Local\Temp\functional-20210915015618-221402117463029\logs.txt: (8.9653675s)
--- PASS: TestFunctional/serial/LogsFileCmd (8.97s)

TestFunctional/parallel/ConfigCmd (3.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config get cpus: exit status 14 (536.0802ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config unset cpus
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 config get cpus: exit status 14 (592.0523ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (3.32s)
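
ConfigCmd exercises both directions: "config get cpus" exits with status 14 while the key is unset, succeeds after "config set cpus 2", and fails again after "config unset cpus". A sketch of capturing that exit code from Go (the wrapper function is hypothetical; 14 is simply the code observed in this log for a missing key):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGetCpus runs "config get cpus" against the profile above and
// returns the process exit code, 0 on success.
func configGetCpus() int {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "functional-20210915015618-22140", "config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode() // 14 in this log: key not found in config
	}
	return 0
}

func main() {
	fmt.Println("exit status:", configGetCpus())
}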

TestFunctional/parallel/DryRun (6.78s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1039: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.6596702s)

-- stdout --
	* [functional-20210915015618-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0915 02:03:37.151348   14484 out.go:298] Setting OutFile to fd 1488 ...
	I0915 02:03:37.154374   14484 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:03:37.154374   14484 out.go:311] Setting ErrFile to fd 1492...
	I0915 02:03:37.154374   14484 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:03:37.183351   14484 out.go:305] Setting JSON to false
	I0915 02:03:37.188348   14484 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10274199,"bootTime":1621397218,"procs":154,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 02:03:37.188348   14484 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 02:03:37.193357   14484 out.go:177] * [functional-20210915015618-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 02:03:37.196401   14484 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 02:03:37.199370   14484 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 02:03:37.201433   14484 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 02:03:37.201433   14484 config.go:177] Loaded profile config "functional-20210915015618-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 02:03:37.205029   14484 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 02:03:38.007627   14484 docker.go:132] docker version: linux-20.10.5
	I0915 02:03:38.020418   14484 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 02:03:39.181105   14484 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1606909s)
	I0915 02:03:39.183300   14484 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-09-15 02:03:38.6397038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 02:03:39.194683   14484 out.go:177] * Using the docker driver based on existing profile
	I0915 02:03:39.197555   14484 start.go:278] selected driver: docker
	I0915 02:03:39.198382   14484 start.go:751] validating driver "docker" against &{Name:functional-20210915015618-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915015618-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 02:03:39.199208   14484 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 02:03:39.307262   14484 out.go:177] 
	W0915 02:03:39.308262   14484 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 02:03:39.311268   14484 out.go:177] 

** /stderr **
functional_test.go:1054: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1054: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --dry-run --alsologtostderr -v=1 --driver=docker: (4.1190521s)
--- PASS: TestFunctional/parallel/DryRun (6.78s)
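
The first dry run fails by design: 250MB is below minikube's usable minimum, so the command exits with status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY reason shown above. A sketch of that validation, assuming only the 1800MB floor quoted in the message (this is an illustration, not minikube's actual source):

package main

import "fmt"

// minUsableMB mirrors the 1800MB floor quoted in the error message above.
const minUsableMB = 1800

// validateRequestedMemory rejects requests below the usable minimum,
// echoing the RSRC_INSUFFICIENT_REQ_MEMORY wording from the log.
func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
	}
	return nil
}

func main() {
	// The --memory 250MB dry run above trips this check; 4000MB does not.
	fmt.Println(validateRequestedMemory(250))
	fmt.Println(validateRequestedMemory(4000))
}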

TestFunctional/parallel/InternationalLanguage (2.77s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1076: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210915015618-22140 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.7733058s)

-- stdout --
	* [functional-20210915015618-22140] minikube v1.23.0 sur Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0915 02:03:34.365182   31024 out.go:298] Setting OutFile to fd 1444 ...
	I0915 02:03:34.366916   31024 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:03:34.366916   31024 out.go:311] Setting ErrFile to fd 1452...
	I0915 02:03:34.366916   31024 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:03:34.387476   31024 out.go:305] Setting JSON to false
	I0915 02:03:34.395906   31024 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":10274197,"bootTime":1621397217,"procs":154,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 02:03:34.396332   31024 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 02:03:34.401600   31024 out.go:177] * [functional-20210915015618-22140] minikube v1.23.0 sur Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 02:03:34.405409   31024 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 02:03:34.407956   31024 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 02:03:34.411924   31024 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 02:03:34.424917   31024 config.go:177] Loaded profile config "functional-20210915015618-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 02:03:34.434960   31024 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 02:03:35.135847   31024 docker.go:132] docker version: linux-20.10.5
	I0915 02:03:35.157228   31024 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 02:03:36.526303   31024 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3690793s)
	I0915 02:03:36.527341   31024 info.go:263] docker info: {ID:6FWJ:GOIP:3UEJ:4EPV:BN5V:RES7:2PF6:QH5I:B3LP:YJP2:JQEM:LHWQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-09-15 02:03:35.8682552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 02:03:36.532365   31024 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 02:03:36.532701   31024 start.go:278] selected driver: docker
	I0915 02:03:36.532874   31024 start.go:751] validating driver "docker" against &{Name:functional-20210915015618-22140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915015618-22140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 02:03:36.534028   31024 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 02:03:36.648828   31024 out.go:177] 
	W0915 02:03:36.649590   31024 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 02:03:36.652931   31024 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.77s)

TestFunctional/parallel/StatusCmd (13.86s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:929: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:929: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 status: (4.5224592s)
functional_test.go:935: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:935: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (5.1042638s)
functional_test.go:946: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:946: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 status -o json: (4.2355571s)
--- PASS: TestFunctional/parallel/StatusCmd (13.86s)

TestFunctional/parallel/AddonsCmd (1.9s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1585: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1585: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 addons list: (1.3716398s)
functional_test.go:1596: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.90s)

TestFunctional/parallel/PersistentVolumeClaim (64.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [9b1d6f62-e3c7-459d-89db-11acc487028f] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0500589s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20210915015618-22140 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20210915015618-22140 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:70: (dbg) Done: kubectl --context functional-20210915015618-22140 apply -f testdata/storage-provisioner/pvc.yaml: (1.6947515s)
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20210915015618-22140 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20210915015618-22140 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20210915015618-22140 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [010c2c24-efec-4a2a-98eb-9c64b64c6197] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [010c2c24-efec-4a2a-98eb-9c64b64c6197] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [010c2c24-efec-4a2a-98eb-9c64b64c6197] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 34.0853239s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:101: (dbg) Done: kubectl --context functional-20210915015618-22140 exec sp-pod -- touch /tmp/mount/foo: (1.1105592s)
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20210915015618-22140 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20210915015618-22140 delete -f testdata/storage-provisioner/pod.yaml: (2.9783699s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20210915015618-22140 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [5c15ac30-7dc0-4ad0-a1f0-138f33cec851] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [5c15ac30-7dc0-4ad0-a1f0-138f33cec851] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [5c15ac30-7dc0-4ad0-a1f0-138f33cec851] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.0837368s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (64.24s)
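
The PVC test proves persistence rather than just provisioning: it touches /tmp/mount/foo in the first sp-pod, deletes that pod, recreates it from the same manifest, and then lists /tmp/mount in the new pod. A condensed sketch of those steps (the kubectl wrapper is hypothetical, and the readiness waits the test performs between steps are elided):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl is a hypothetical wrapper pinned to the context used in this log.
func kubectl(args ...string) ([]byte, error) {
	all := append([]string{"--context", "functional-20210915015618-22140"}, args...)
	return exec.Command("kubectl", all...).CombinedOutput()
}

func main() {
	// Write a marker into the PVC-backed mount, recreate the pod from the
	// same manifest, then confirm the file survived the pod's deletion.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits for the new pod to be Running before this step)
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // expect "foo"
}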

TestFunctional/parallel/SSHCmd (8.84s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1618: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "echo hello"
functional_test.go:1618: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "echo hello": (3.7425625s)
functional_test.go:1635: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "cat /etc/hostname"
functional_test.go:1635: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "cat /etc/hostname": (5.0921517s)
--- PASS: TestFunctional/parallel/SSHCmd (8.84s)

TestFunctional/parallel/CpCmd (10.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cp testdata\cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 cp testdata\cp-test.txt /home/docker/cp-test.txt: (5.3543163s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /home/docker/cp-test.txt": (5.0817251s)
--- PASS: TestFunctional/parallel/CpCmd (10.44s)

TestFunctional/parallel/MySQL (128.39s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) Run:  kubectl --context functional-20210915015618-22140 replace --force -f testdata\mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) Done: kubectl --context functional-20210915015618-22140 replace --force -f testdata\mysql.yaml: (1.1654295s)
functional_test.go:1671: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-fk78l" [574d0def-aef2-463f-94bc-969a7cb0d8d4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-fk78l" [574d0def-aef2-463f-94bc-969a7cb0d8d4] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1671: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m34.126189s
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": exit status 1 (1.5437461s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": exit status 1 (1.5644441s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": exit status 1 (1.339805s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": exit status 1 (1.7900923s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": exit status 1 (1.5300551s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": exit status 1 (2.310012s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1053 (08S01) at line 1: Server shutdown in progress
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Done: kubectl --context functional-20210915015618-22140 exec mysql-9bbbc5bbb-fk78l -- mysql -ppassword -e "show databases;": (1.5133539s)
--- PASS: TestFunctional/parallel/MySQL (128.39s)
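
The repeated non-zero exits above are the expected warm-up of the mysql container: first the socket is absent, then root auth is still initializing, then the server restarts, and only then does "show databases;" succeed. A sketch of the retry-until-ready polling this implies (interval and attempt count are assumptions; pod and context names mirror the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Keep querying until mysqld answers; several early failures are normal.
	for attempt := 0; attempt < 20; attempt++ {
		out, err := exec.Command("kubectl",
			"--context", "functional-20210915015618-22140",
			"exec", "mysql-9bbbc5bbb-fk78l", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("mysql never became ready")
}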

TestFunctional/parallel/FileSync (5.46s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1798: Checking for existence of /etc/test/nested/copy/22140/hosts within VM
functional_test.go:1799: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/test/nested/copy/22140/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1799: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/test/nested/copy/22140/hosts": (5.4560652s)
functional_test.go:1804: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (5.46s)

TestFunctional/parallel/CertSync (27.98s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /etc/ssl/certs/22140.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/22140.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1840: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/22140.pem": (4.0553939s)
functional_test.go:1839: Checking for existence of /usr/share/ca-certificates/22140.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /usr/share/ca-certificates/22140.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1840: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /usr/share/ca-certificates/22140.pem": (4.3485279s)
functional_test.go:1839: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1840: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1840: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/51391683.0": (6.6093653s)
functional_test.go:1866: Checking for existence of /etc/ssl/certs/221402.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/221402.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/221402.pem": (4.3632357s)
functional_test.go:1866: Checking for existence of /usr/share/ca-certificates/221402.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /usr/share/ca-certificates/221402.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /usr/share/ca-certificates/221402.pem": (4.3102769s)
functional_test.go:1866: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (4.2875295s)
--- PASS: TestFunctional/parallel/CertSync (27.98s)
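A note on the paired checks above: OpenSSL locates CA certificates by a hash of their subject, so each synced <name>.pem is also expected under a <subject_hash>.0 filename (here 51391683.0 and 3ec20f2e.0). A minimal way to derive such a hash by hand, assuming the profile above is still running (the command is standard openssl, not part of the test):

    out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/22140.pem"

If sync worked, the printed hash should match the basename of the corresponding .0 file checked above.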

TestFunctional/parallel/NodeLabels (0.32s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-20210915015618-22140 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.32s)

TestFunctional/parallel/LoadImage (16.66s)
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:241: (dbg) Run:  docker pull busybox:1.33
functional_test.go:241: (dbg) Done: docker pull busybox:1.33: (3.5782479s)
functional_test.go:248: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210915015618-22140
functional_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load --daemon docker.io/library/busybox:load-functional-20210915015618-22140
functional_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load --daemon docker.io/library/busybox:load-functional-20210915015618-22140: (6.9926905s)
functional_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210915015618-22140 -- docker image inspect docker.io/library/busybox:load-functional-20210915015618-22140
functional_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210915015618-22140 -- docker image inspect docker.io/library/busybox:load-functional-20210915015618-22140: (5.2438959s)
--- PASS: TestFunctional/parallel/LoadImage (16.66s)

TestFunctional/parallel/SaveImage (16.81s)
=== RUN   TestFunctional/parallel/SaveImage
=== PAUSE TestFunctional/parallel/SaveImage

=== CONT  TestFunctional/parallel/SaveImage
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image pull docker.io/library/busybox:1.29
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image pull docker.io/library/busybox:1.29: (6.6644305s)
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image tag docker.io/library/busybox:1.29 docker.io/library/busybox:save-functional-20210915015618-22140
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image tag docker.io/library/busybox:1.29 docker.io/library/busybox:save-functional-20210915015618-22140: (2.718594s)
functional_test.go:394: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image save --daemon docker.io/library/busybox:save-functional-20210915015618-22140
functional_test.go:394: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image save --daemon docker.io/library/busybox:save-functional-20210915015618-22140: (6.6723254s)
functional_test.go:400: (dbg) Run:  docker images busybox
--- PASS: TestFunctional/parallel/SaveImage (16.81s)

TestFunctional/parallel/RemoveImage (19.5s)
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:333: (dbg) Run:  docker pull busybox:1.32

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:333: (dbg) Done: docker pull busybox:1.32: (3.9418535s)
functional_test.go:340: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210915015618-22140

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:346: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load docker.io/library/busybox:remove-functional-20210915015618-22140

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:346: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image load docker.io/library/busybox:remove-functional-20210915015618-22140: (6.2948857s)
functional_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image rm docker.io/library/busybox:remove-functional-20210915015618-22140
functional_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image rm docker.io/library/busybox:remove-functional-20210915015618-22140: (2.4465557s)
functional_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210915015618-22140 -- docker images

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210915015618-22140 -- docker images: (5.8518284s)
--- PASS: TestFunctional/parallel/RemoveImage (19.50s)

TestFunctional/parallel/SaveImageToFile (20.68s)
=== RUN   TestFunctional/parallel/SaveImageToFile
=== PAUSE TestFunctional/parallel/SaveImageToFile

=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:421: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image pull docker.io/library/busybox:1.30

=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:421: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image pull docker.io/library/busybox:1.30: (5.9002289s)
functional_test.go:429: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image tag docker.io/library/busybox:1.30 docker.io/library/busybox:save-to-file-functional-20210915015618-22140

=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:429: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image tag docker.io/library/busybox:1.30 docker.io/library/busybox:save-to-file-functional-20210915015618-22140: (5.1171597s)
functional_test.go:440: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image save docker.io/library/busybox:save-to-file-functional-20210915015618-22140 C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar

=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:440: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image save docker.io/library/busybox:save-to-file-functional-20210915015618-22140 C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar: (6.0944798s)
functional_test.go:446: (dbg) Run:  docker load -i C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar
functional_test.go:446: (dbg) Done: docker load -i C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar: (2.6261871s)
functional_test.go:453: (dbg) Run:  docker images busybox
--- PASS: TestFunctional/parallel/SaveImageToFile (20.68s)
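The save/load round trip exercised above can be reproduced by hand with the same commands the test drives, substituting your own tag and output path:

    out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image save docker.io/library/busybox:save-to-file-functional-20210915015618-22140 busybox-save.tar
    docker load -i busybox-save.tar
    docker images busybox

This exports the image from the cluster's container runtime to a tarball on the host, then imports it into the host's Docker daemon.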

TestFunctional/parallel/BuildImage (16.36s)
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image build -t localhost/my-image:functional-20210915015618-22140 testdata\build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:504: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image build -t localhost/my-image:functional-20210915015618-22140 testdata\build: (10.3224598s)
functional_test.go:509: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image build -t localhost/my-image:functional-20210915015618-22140 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
24fb2886d6f6: Pulling fs layer
24fb2886d6f6: Verifying Checksum
24fb2886d6f6: Download complete
24fb2886d6f6: Pull complete
Digest: sha256:52f73a0a43a16cf37cd0720c90887ce972fe60ee06a687ee71fb93a7ca601df7
Status: Downloaded newer image for busybox:latest
---> 16ea53ea7c65
Step 2/3 : RUN true
---> Running in 1204aa4de794
Removing intermediate container 1204aa4de794
---> d3c2a1b8cf94
Step 3/3 : ADD content.txt /
---> fc45faca2e10
Successfully built fc45faca2e10
Successfully tagged localhost/my-image:functional-20210915015618-22140
functional_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210915015618-22140 -- docker image inspect localhost/my-image:functional-20210915015618-22140

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210915015618-22140 -- docker image inspect localhost/my-image:functional-20210915015618-22140: (6.0342785s)
--- PASS: TestFunctional/parallel/BuildImage (16.36s)

TestFunctional/parallel/ListImages (4.28s)
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image ls

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image ls: (4.2802296s)
functional_test.go:543: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210915015618-22140
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
functional_test.go:546: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 image ls:
! Executing "docker container inspect functional-20210915015618-22140 --format={{.State.Status}}" took an unusually long time: 2.5977668s
* Restarting the docker service may improve performance.
--- PASS: TestFunctional/parallel/ListImages (4.28s)

TestFunctional/parallel/NonActiveRuntimeDisabled (5.79s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo systemctl is-active crio": exit status 1 (5.7921246s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	! Executing "docker container inspect functional-20210915015618-22140 --format={{.State.Status}}" took an unusually long time: 2.2837635s
	* Restarting the docker service may improve performance.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (5.79s)
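The non-zero exit above is the expected result: systemctl is-active exits with status 3 for an inactive unit, minikube ssh propagates that ("Process exited with status 3"), and the test reads it as confirmation that crio is disabled on a Docker-runtime cluster. The same check can be run directly, assuming the profile is still up:

    out/minikube-windows-amd64.exe -p functional-20210915015618-22140 ssh "sudo systemctl is-active crio"

which prints "inactive" and exits non-zero, exactly as captured in the stdout/stderr blocks above.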

TestFunctional/parallel/ProfileCmd/profile_not_create (5.95s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1322: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
E0915 02:03:22.151050   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1322: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (1.3452557s)
functional_test.go:1326: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1326: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.6056122s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (5.95s)

TestFunctional/parallel/ProfileCmd/profile_list (6.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list: (5.8065827s)
functional_test.go:1365: Took "5.8067829s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1379: Took "643.7541ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (6.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (5.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1410: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1410: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (4.9021904s)
functional_test.go:1415: Took "4.9027826s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1423: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1428: Took "523.1322ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (5.43s)

TestFunctional/parallel/Version/short (0.68s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2123: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 version --short
--- PASS: TestFunctional/parallel/Version/short (0.68s)

TestFunctional/parallel/Version/components (7.87s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 version -o=json --components: (7.8742964s)
--- PASS: TestFunctional/parallel/Version/components (7.87s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20210915015618-22140 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (101.82s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20210915015618-22140 apply -f testdata\testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Done: kubectl --context functional-20210915015618-22140 apply -f testdata\testsvc.yaml: (1.3783893s)
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [e2437971-9590-42bb-a403-dfa7943d0609] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [e2437971-9590-42bb-a403-dfa7943d0609] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 1m40.3187033s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (101.82s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20210915015618-22140 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.38s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20210915015618-22140 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to kill pid 20116: DuplicateHandle: The handle is invalid.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/DockerEnv/powershell (21.24s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:601: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915015618-22140 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20210915015618-22140"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:601: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915015618-22140 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20210915015618-22140": (13.1765462s)
functional_test.go:622: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915015618-22140 docker-env | Invoke-Expression ; docker images"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:622: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915015618-22140 docker-env | Invoke-Expression ; docker images": (8.0414041s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (21.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.58s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1985: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1985: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 update-context --alsologtostderr -v=2: (3.5800372s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.58s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.37s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1985: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1985: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 update-context --alsologtostderr -v=2: (3.362654s)
E0915 02:08:22.145008   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.37s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (1.86s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1985: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1985: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 update-context --alsologtostderr -v=2: (1.857436s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (1.86s)

TestFunctional/delete_busybox_image (1.78s)
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:186: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210915015618-22140
functional_test.go:191: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210915015618-22140
--- PASS: TestFunctional/delete_busybox_image (1.78s)

TestFunctional/delete_my-image_image (0.77s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210915015618-22140
--- PASS: TestFunctional/delete_my-image_image (0.77s)

TestFunctional/delete_minikube_cached_images (0.75s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210915015618-22140
--- PASS: TestFunctional/delete_minikube_cached_images (0.75s)

TestJSONOutput/start/Command (223.26s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20210915020903-22140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0915 02:09:45.224667   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20210915020903-22140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (3m43.25601s)
--- PASS: TestJSONOutput/start/Command (223.26s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (6.07s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20210915020903-22140 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20210915020903-22140 --output=json --user=testUser: (6.0691404s)
--- PASS: TestJSONOutput/pause/Command (6.07s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (5.46s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20210915020903-22140 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20210915020903-22140 --output=json --user=testUser: (5.458372s)
--- PASS: TestJSONOutput/unpause/Command (5.46s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20210915020903-22140 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20210915020903-22140 --output=json --user=testUser: (18.84342s)
--- PASS: TestJSONOutput/stop/Command (18.84s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (5.43s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20210915021329-22140 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20210915021329-22140 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (440.0581ms)

-- stdout --
	{"specversion":"1.0","id":"c4e33b21-d418-49dd-8c5a-feac01115fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20210915021329-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ddfb26c-8654-4e7a-aa9f-7d36c4d9d3c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6ad2c6ff-677d-4d75-b5a9-3b6719d63f6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"daefb423-0f6b-46ec-a56d-c23898d3a867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12425"}}
	{"specversion":"1.0","id":"ec32b88e-f3ab-4fbb-adb0-f2686e54a7c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210915021329-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20210915021329-22140
E0915 02:13:31.464972   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20210915021329-22140: (4.9833327s)
--- PASS: TestErrorJSONOutput (5.43s)
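Each line minikube emits with --output=json is a CloudEvents envelope whose payload sits under .data, so the stream is easy to post-process. A minimal sketch, assuming jq is available and using POSIX-shell quoting (the "demo" profile name is illustrative, not part of the test):

    out/minikube-windows-amd64.exe start -p demo --output=json --driver=fail 2>&1 | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

Applied to the run above, this would reduce the output to a single line: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on windows/amd64.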

TestKicCustomNetwork/create_custom_network (209.73s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210915021334-22140 --network=
E0915 02:13:41.707347   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:14:02.189416   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:14:43.153615   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:16:05.075965   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210915021334-22140 --network=: (3m14.1208755s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210915021334-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210915021334-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210915021334-22140: (14.8757656s)
--- PASS: TestKicCustomNetwork/create_custom_network (209.73s)

TestKicCustomNetwork/use_default_bridge_network (209.41s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210915021704-22140 --network=bridge
E0915 02:18:21.214536   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:18:22.144461   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 02:18:48.921075   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210915021704-22140 --network=bridge: (3m13.0715648s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210915021704-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210915021704-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210915021704-22140: (15.6474577s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (209.41s)

TestKicExistingNetwork (211.35s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20210915022036-22140 --network=existing-network
E0915 02:23:21.216674   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:23:22.143765   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20210915022036-22140 --network=existing-network: (3m11.5463341s)
helpers_test.go:176: Cleaning up "existing-network-20210915022036-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20210915022036-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20210915022036-22140: (15.8452427s)
--- PASS: TestKicExistingNetwork (211.35s)
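Taken together, the three network tests above exercise the docker driver's --network flag end to end: an auto-created custom network, the default bridge, and a network that already exists. For the last case the network is pre-created outside minikube; a minimal sketch of that flow (the profile name here is illustrative):

    docker network create existing-network
    out/minikube-windows-amd64.exe start -p my-cluster --network=existing-network

minikube then attaches the node container to the existing network instead of creating its own, and the docker network ls calls above verify the resulting network list.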

TestMainNoArgs (0.49s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.49s)

TestMultiNode/serial/FreshStart2Nodes (405.79s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0915 02:26:25.224370   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 02:28:21.213403   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:28:22.141941   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 02:29:44.282634   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
multinode_test.go:82: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (6m37.9375957s)
multinode_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr
multinode_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr: (7.8548287s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (405.79s)

TestMultiNode/serial/DeployApp2Nodes (25.42s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:463: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.7578872s)
multinode_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- rollout status deployment/busybox: (7.0326788s)
multinode_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:494: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- nslookup kubernetes.io: (4.341375s)
multinode_test.go:494: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- nslookup kubernetes.io: (2.2107201s)
multinode_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- nslookup kubernetes.default: (1.2294682s)
multinode_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- nslookup kubernetes.default: (2.4867247s)
multinode_test.go:512: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- nslookup kubernetes.default.svc.cluster.local: (2.3809745s)
multinode_test.go:512: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- nslookup kubernetes.default.svc.cluster.local: (2.4472201s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (25.42s)

TestMultiNode/serial/PingHostFrom2Pods (9.46s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:522: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- get pods -o jsonpath='{.items[*].metadata.name}': (2.0888686s)
multinode_test.go:530: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:530: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.4788125s)
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-4qczx -- sh -c "ping -c 1 192.168.65.2": (1.1484227s)
multinode_test.go:530: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:530: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1.1650544s)
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915022405-22140 -- exec busybox-84b6686758-5px4t -- sh -c "ping -c 1 192.168.65.2": (2.572452s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (9.46s)
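
host.minikube.internal is the name minikube publishes inside the cluster for the host machine; the test scrapes its address out of nslookup output and pings it from each pod. A hand-run equivalent (pod name is a placeholder; the 192.168.65.2 address seen above is specific to this Docker network):

	minikube -p demo kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube -p demo kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 <host-ip>"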

TestMultiNode/serial/AddNode (163.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210915022405-22140 -v 3 --alsologtostderr
E0915 02:33:21.213174   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:33:22.140849   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
multinode_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20210915022405-22140 -v 3 --alsologtostderr: (2m33.5196239s)
multinode_test.go:113: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr
multinode_test.go:113: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr: (10.1318938s)
--- PASS: TestMultiNode/serial/AddNode (163.65s)
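
The same grow-the-cluster step can be scripted directly; a sketch with a placeholder profile name:

	minikube node add -p demo -v 3 --alsologtostderr   # provisions a new worker and joins it
	minikube -p demo status --alsologtostderr          # the new node should report host/kubelet Running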

TestMultiNode/serial/ProfileList (5.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (5.1711888s)
--- PASS: TestMultiNode/serial/ProfileList (5.17s)

TestMultiNode/serial/CopyFile (35.45s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --output json --alsologtostderr
multinode_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --output json --alsologtostderr: (9.8823124s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 cp testdata\cp-test.txt /home/docker/cp-test.txt: (2.8274049s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 ssh "sudo cat /home/docker/cp-test.txt": (4.871278s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 cp testdata\cp-test.txt multinode-20210915022405-22140-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 cp testdata\cp-test.txt multinode-20210915022405-22140-m02:/home/docker/cp-test.txt: (3.5094308s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 ssh -n multinode-20210915022405-22140-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 ssh -n multinode-20210915022405-22140-m02 "sudo cat /home/docker/cp-test.txt": (4.8297749s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 cp testdata\cp-test.txt multinode-20210915022405-22140-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 cp testdata\cp-test.txt multinode-20210915022405-22140-m03:/home/docker/cp-test.txt: (4.7284025s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 ssh -n multinode-20210915022405-22140-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 ssh -n multinode-20210915022405-22140-m03 "sudo cat /home/docker/cp-test.txt": (4.7953093s)
--- PASS: TestMultiNode/serial/CopyFile (35.45s)
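
minikube cp targets the control plane by default and a specific node when the destination is prefixed with the node name; ssh -n selects the node to verify on. A sketch with a placeholder profile:

	minikube -p demo cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p demo cp testdata/cp-test.txt demo-m02:/home/docker/cp-test.txt
	minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"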

TestMultiNode/serial/StopNode (21.75s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 node stop m03
multinode_test.go:192: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 node stop m03: (6.8132728s)
multinode_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status: exit status 7 (6.7742943s)

-- stdout --
	multinode-20210915022405-22140
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210915022405-22140-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210915022405-22140-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr: exit status 7 (8.15578s)

-- stdout --
	multinode-20210915022405-22140
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210915022405-22140-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210915022405-22140-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 02:35:04.806580   35928 out.go:298] Setting OutFile to fd 1092 ...
	I0915 02:35:04.809299   35928 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:35:04.809299   35928 out.go:311] Setting ErrFile to fd 1604...
	I0915 02:35:04.809299   35928 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:35:04.842405   35928 out.go:305] Setting JSON to false
	I0915 02:35:04.842405   35928 mustload.go:65] Loading cluster: multinode-20210915022405-22140
	I0915 02:35:04.843430   35928 config.go:177] Loaded profile config "multinode-20210915022405-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 02:35:04.843430   35928 status.go:253] checking status of multinode-20210915022405-22140 ...
	I0915 02:35:04.889327   35928 cli_runner.go:115] Run: docker container inspect multinode-20210915022405-22140 --format={{.State.Status}}
	I0915 02:35:06.854495   35928 cli_runner.go:168] Completed: docker container inspect multinode-20210915022405-22140 --format={{.State.Status}}: (1.9648373s)
	I0915 02:35:06.854495   35928 status.go:328] multinode-20210915022405-22140 host status = "Running" (err=<nil>)
	I0915 02:35:06.854688   35928 host.go:66] Checking if "multinode-20210915022405-22140" exists ...
	I0915 02:35:06.862432   35928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210915022405-22140
	I0915 02:35:07.517301   35928 host.go:66] Checking if "multinode-20210915022405-22140" exists ...
	I0915 02:35:07.535001   35928 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 02:35:07.544602   35928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210915022405-22140
	I0915 02:35:08.133479   35928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57674 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\multinode-20210915022405-22140\id_rsa Username:docker}
	I0915 02:35:08.368312   35928 ssh_runner.go:152] Run: systemctl --version
	I0915 02:35:08.422501   35928 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 02:35:08.493369   35928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210915022405-22140
	I0915 02:35:09.154576   35928 kubeconfig.go:93] found "multinode-20210915022405-22140" server: "https://127.0.0.1:57673"
	I0915 02:35:09.155121   35928 api_server.go:164] Checking apiserver status ...
	I0915 02:35:09.200116   35928 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 02:35:09.351850   35928 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/2083/cgroup
	I0915 02:35:09.418415   35928 api_server.go:180] apiserver freezer: "7:freezer:/docker/1d12e25d1a98f775432c1058384186d1aa75a80b0cc400cb47e9bc47295ae836/kubepods/burstable/pod37a0fc703d1c753cbb21385c0f393bbd/ee2adbd4c6b94b4d9cf179673d857d718d79c0c9b4a87367b887774009c114e9"
	I0915 02:35:09.428957   35928 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/1d12e25d1a98f775432c1058384186d1aa75a80b0cc400cb47e9bc47295ae836/kubepods/burstable/pod37a0fc703d1c753cbb21385c0f393bbd/ee2adbd4c6b94b4d9cf179673d857d718d79c0c9b4a87367b887774009c114e9/freezer.state
	I0915 02:35:09.497658   35928 api_server.go:202] freezer state: "THAWED"
	I0915 02:35:09.497795   35928 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57673/healthz ...
	I0915 02:35:09.566175   35928 api_server.go:265] https://127.0.0.1:57673/healthz returned 200:
	ok
	I0915 02:35:09.570699   35928 status.go:419] multinode-20210915022405-22140 apiserver status = Running (err=<nil>)
	I0915 02:35:09.570699   35928 status.go:255] multinode-20210915022405-22140 status: &{Name:multinode-20210915022405-22140 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 02:35:09.570699   35928 status.go:253] checking status of multinode-20210915022405-22140-m02 ...
	I0915 02:35:09.594258   35928 cli_runner.go:115] Run: docker container inspect multinode-20210915022405-22140-m02 --format={{.State.Status}}
	I0915 02:35:10.269753   35928 status.go:328] multinode-20210915022405-22140-m02 host status = "Running" (err=<nil>)
	I0915 02:35:10.270718   35928 host.go:66] Checking if "multinode-20210915022405-22140-m02" exists ...
	I0915 02:35:10.283222   35928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210915022405-22140-m02
	I0915 02:35:10.904018   35928 host.go:66] Checking if "multinode-20210915022405-22140-m02" exists ...
	I0915 02:35:10.919128   35928 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 02:35:10.930571   35928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210915022405-22140-m02
	I0915 02:35:11.552393   35928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57718 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\multinode-20210915022405-22140-m02\id_rsa Username:docker}
	I0915 02:35:11.803436   35928 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 02:35:11.857272   35928 status.go:255] multinode-20210915022405-22140-m02 status: &{Name:multinode-20210915022405-22140-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 02:35:11.857567   35928 status.go:253] checking status of multinode-20210915022405-22140-m03 ...
	I0915 02:35:11.877873   35928 cli_runner.go:115] Run: docker container inspect multinode-20210915022405-22140-m03 --format={{.State.Status}}
	I0915 02:35:12.543529   35928 status.go:328] multinode-20210915022405-22140-m03 host status = "Stopped" (err=<nil>)
	I0915 02:35:12.543988   35928 status.go:341] host is not running, skipping remaining checks
	I0915 02:35:12.544171   35928 status.go:255] multinode-20210915022405-22140-m03 status: &{Name:multinode-20210915022405-22140-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.75s)
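
Note the exit-code convention the test relies on: once any node is down, "minikube status" exits with status 7 rather than 0 (the "(may be ok)" remark later in this report refers to the same convention), so automation should treat 7 as "some component stopped" rather than a hard failure. Sketch:

	minikube -p demo node stop m03
	minikube -p demo status
	echo $?   # expect 7 while m03 is stopped (use %ERRORLEVEL% under cmd.exe)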

TestMultiNode/serial/StartAfterStop (122.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 node start m03 --alsologtostderr
multinode_test.go:236: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 node start m03 --alsologtostderr: (1m51.4714768s)
multinode_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status
multinode_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status: (10.1476346s)
multinode_test.go:257: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (122.52s)
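
Restarting the single stopped worker, as exercised above; a placeholder-profile sketch:

	minikube -p demo node start m03 --alsologtostderr
	minikube -p demo status   # back to exit status 0 once all nodes run
	kubectl get nodes         # the node should rejoin the API server's view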

TestMultiNode/serial/RestartKeepsNodes (241.26s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210915022405-22140
multinode_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20210915022405-22140
multinode_test.go:272: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20210915022405-22140: (37.4974712s)
multinode_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140 --wait=true -v=8 --alsologtostderr
E0915 02:38:21.210818   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:38:22.139853   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
multinode_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140 --wait=true -v=8 --alsologtostderr: (3m22.7385954s)
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210915022405-22140
--- PASS: TestMultiNode/serial/RestartKeepsNodes (241.26s)
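
The invariant checked here is that a full stop/start cycle preserves the node set. A sketch of the same round trip:

	minikube node list -p demo                  # record the node list
	minikube stop -p demo
	minikube start -p demo --wait=true -v=8 --alsologtostderr
	minikube node list -p demo                  # should match the recorded list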

TestMultiNode/serial/DeleteNode (32.83s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 node delete m03
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 node delete m03: (23.7724076s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr: (7.5925726s)
multinode_test.go:396: (dbg) Run:  docker volume ls
multinode_test.go:406: (dbg) Run:  kubectl get nodes
multinode_test.go:414: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (32.83s)
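
The readiness check at the end uses a go-template that prints one Ready condition status per node; the same template works standalone (quoting shown for a POSIX shell; the test wraps it in extra quotes for Windows):

	minikube -p demo node delete m03
	docker volume ls   # the deleted node's volume should be gone
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'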

TestMultiNode/serial/StopMultiNode (38.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 stop
multinode_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 stop: (32.5346192s)
multinode_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status: exit status 7 (3.098318s)

-- stdout --
	multinode-20210915022405-22140
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210915022405-22140-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr: exit status 7 (3.0612965s)

-- stdout --
	multinode-20210915022405-22140
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210915022405-22140-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 02:42:25.199373   55428 out.go:298] Setting OutFile to fd 1660 ...
	I0915 02:42:25.201319   55428 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:42:25.201319   55428 out.go:311] Setting ErrFile to fd 1152...
	I0915 02:42:25.201319   55428 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 02:42:25.219861   55428 out.go:305] Setting JSON to false
	I0915 02:42:25.219861   55428 mustload.go:65] Loading cluster: multinode-20210915022405-22140
	I0915 02:42:25.221112   55428 config.go:177] Loaded profile config "multinode-20210915022405-22140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 02:42:25.221112   55428 status.go:253] checking status of multinode-20210915022405-22140 ...
	I0915 02:42:25.241460   55428 cli_runner.go:115] Run: docker container inspect multinode-20210915022405-22140 --format={{.State.Status}}
	I0915 02:42:27.217453   55428 cli_runner.go:168] Completed: docker container inspect multinode-20210915022405-22140 --format={{.State.Status}}: (1.9760009s)
	I0915 02:42:27.217453   55428 status.go:328] multinode-20210915022405-22140 host status = "Stopped" (err=<nil>)
	I0915 02:42:27.217453   55428 status.go:341] host is not running, skipping remaining checks
	I0915 02:42:27.217453   55428 status.go:255] multinode-20210915022405-22140 status: &{Name:multinode-20210915022405-22140 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 02:42:27.217453   55428 status.go:253] checking status of multinode-20210915022405-22140-m02 ...
	I0915 02:42:27.235433   55428 cli_runner.go:115] Run: docker container inspect multinode-20210915022405-22140-m02 --format={{.State.Status}}
	I0915 02:42:27.846683   55428 status.go:328] multinode-20210915022405-22140-m02 host status = "Stopped" (err=<nil>)
	I0915 02:42:27.846683   55428 status.go:341] host is not running, skipping remaining checks
	I0915 02:42:27.846683   55428 status.go:255] multinode-20210915022405-22140-m02 status: &{Name:multinode-20210915022405-22140-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (38.70s)
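
Unlike a per-node stop, a profile-level stop halts every node, after which status reports each host as Stopped and again exits 7. Sketch:

	minikube -p demo stop     # stops the control plane and all workers
	minikube -p demo status   # all hosts Stopped; exit status 7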

TestMultiNode/serial/RestartMultiNode (210.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140 --wait=true -v=8 --alsologtostderr --driver=docker
E0915 02:43:05.224217   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 02:43:21.208108   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:43:22.143146   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
multinode_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140 --wait=true -v=8 --alsologtostderr --driver=docker: (3m22.3146986s)
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr
multinode_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915022405-22140 status --alsologtostderr: (6.4420582s)
multinode_test.go:356: (dbg) Run:  kubectl get nodes
multinode_test.go:364: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (210.24s)

TestMultiNode/serial/ValidateNameConflict (254.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210915022405-22140
multinode_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140-m02 --driver=docker
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140-m02 --driver=docker: exit status 14 (497.2206ms)

-- stdout --
	* [multinode-20210915022405-22140-m02] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210915022405-22140-m02' is duplicated with machine name 'multinode-20210915022405-22140-m02' in profile 'multinode-20210915022405-22140'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140-m03 --driver=docker
E0915 02:46:24.280920   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:48:21.209091   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:48:22.139954   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
multinode_test.go:442: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915022405-22140-m03 --driver=docker: (3m48.8028869s)
multinode_test.go:449: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210915022405-22140
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20210915022405-22140: exit status 80 (4.7612116s)

-- stdout --
	* Adding node m03 to cluster multinode-20210915022405-22140
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210915022405-22140-m03 already exists in multinode-20210915022405-22140-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                             │
	│    * If the above advice does not help, please let us know:                                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                               │
	│                                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                    │
	│    * Please also attach the following file to the GitHub issue:                                             │
	│    * - C:\Users\jenkins\AppData\Local\Temp\minikube_node_68dc163ecc1470275f97c1774d2d827d0925d552_16.log    │
	│                                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20210915022405-22140-m03
multinode_test.go:454: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20210915022405-22140-m03: (19.6047392s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (254.15s)
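
The collision arises because a new profile named like an existing profile's machine (the first worker of "demo" is the machine "demo-m02") is rejected with exit status 14 / MK_USAGE. A sketch of the same collision with a placeholder profile:

	minikube start -p demo --driver=docker        # machine demo, plus demo-m02 after a node add
	minikube node add -p demo
	minikube start -p demo-m02 --driver=docker    # refused: duplicate profile/machine name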

TestDebPackageInstall/install_amd64_debian_sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian_latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian_10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_10/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian_9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_9/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_9/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_9/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver (0.00s)

TestPreload (498.1s)
TestPreload (498.1s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210915025042-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0915 02:53:21.208733   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:53:22.136523   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210915025042-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (4m21.86662s)
preload_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210915025042-22140 -- docker pull busybox
preload_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210915025042-22140 -- docker pull busybox: (7.6690223s)
preload_test.go:72: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210915025042-22140 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0915 02:58:21.207394   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 02:58:22.136243   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
preload_test.go:72: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210915025042-22140 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (3m27.5063507s)
preload_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210915025042-22140 -- docker images
preload_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210915025042-22140 -- docker images: (3.7588381s)
helpers_test.go:176: Cleaning up "test-preload-20210915025042-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20210915025042-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20210915025042-22140: (17.2984332s)
--- PASS: TestPreload (498.10s)
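
The flow being validated: start without a preload tarball, side-load an image, then upgrade Kubernetes and confirm the image survives the restart. A condensed sketch (placeholder profile; the versions match this run):

	minikube start -p demo --preload=false --kubernetes-version=v1.17.0 --driver=docker
	minikube ssh -p demo -- docker pull busybox
	minikube start -p demo --kubernetes-version=v1.17.3 --driver=docker
	minikube ssh -p demo -- docker images   # busybox should still be listed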

TestSkaffold (318.2s)

=== RUN   TestSkaffold
skaffold_test.go:58: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe763621421 version
skaffold_test.go:62: skaffold version: v1.31.0
skaffold_test.go:65: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20210915030334-22140 --memory=2600 --driver=docker
skaffold_test.go:65: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20210915030334-22140 --memory=2600 --driver=docker: (3m20.7241671s)
skaffold_test.go:85: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube.exe
skaffold_test.go:109: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe763621421 run --minikube-profile skaffold-20210915030334-22140 --kube-context skaffold-20210915030334-22140 --status-check=true --port-forward=false --interactive=false
E0915 03:08:21.204059   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:08:22.133674   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
skaffold_test.go:109: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\skaffold.exe763621421 run --minikube-profile skaffold-20210915030334-22140 --kube-context skaffold-20210915030334-22140 --status-check=true --port-forward=false --interactive=false: (1m26.8886768s)
skaffold_test.go:115: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-77578fdb67-bkmp8" [56c0a0ba-f8db-48e8-a494-8510d90330fe] Running
skaffold_test.go:115: (dbg) TestSkaffold: app=leeroy-app healthy within 5.071325s
skaffold_test.go:118: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-78c9dc7456-7q7nd" [636e5220-1202-441e-b645-03ff30048831] Running
skaffold_test.go:118: (dbg) TestSkaffold: app=leeroy-web healthy within 5.0348677s
helpers_test.go:176: Cleaning up "skaffold-20210915030334-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20210915030334-22140
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20210915030334-22140: (18.9654466s)
--- PASS: TestSkaffold (318.20s)
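
The skaffold integration drives a build-and-deploy against the minikube context; a sketch assuming skaffold is on PATH and the working directory holds a skaffold.yaml (the flags mirror the run above):

	minikube start -p demo --driver=docker
	skaffold run --minikube-profile demo --kube-context demo --status-check=true --port-forward=false --interactive=false
	kubectl get pods -l app=leeroy-app   # the deployed pods should be Running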

TestRunningBinaryUpgrade (1026.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.113419259.exe start -p running-upgrade-20210915030944-22140 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.113419259.exe start -p running-upgrade-20210915030944-22140 --memory=2200 --vm-driver=docker: (12m55.2423466s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20210915030944-22140 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20210915030944-22140 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m37.5132139s)
helpers_test.go:176: Cleaning up "running-upgrade-20210915030944-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20210915030944-22140

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20210915030944-22140: (32.7832722s)
--- PASS: TestRunningBinaryUpgrade (1026.10s)

TestKubernetesUpgrade (1161.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: (10m0.5036318s)
version_upgrade_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210915032703-22140
version_upgrade_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210915032703-22140: (14.4718632s)
version_upgrade_test.go:236: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210915032703-22140 status --format={{.Host}}
version_upgrade_test.go:236: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210915032703-22140 status --format={{.Host}}: exit status 7 (1.5183579s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:238: status error: exit status 7 (may be ok)
version_upgrade_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker
E0915 03:38:21.197602   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:38:22.127751   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:38:23.729635   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:43:21.196658   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:43:22.125671   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:43:23.727243   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
version_upgrade_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker: (6m33.332255s)
version_upgrade_test.go:252: (dbg) Run:  kubectl --context kubernetes-upgrade-20210915032703-22140 version --output=json
version_upgrade_test.go:271: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:273: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker
version_upgrade_test.go:273: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker: exit status 106 (527.4281ms)

-- stdout --
	* [kubernetes-upgrade-20210915032703-22140] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.2-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210915032703-22140
	    minikube start -p kubernetes-upgrade-20210915032703-22140 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210915032703-221402 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.2-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210915032703-22140 --kubernetes-version=v1.22.2-rc.0
	    

** /stderr **
version_upgrade_test.go:277: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:279: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:279: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915032703-22140 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker: (1m45.8974437s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210915032703-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210915032703-22140

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210915032703-22140: (44.8227444s)
--- PASS: TestKubernetesUpgrade (1161.45s)
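
The upgrade path is stop-then-start at a newer --kubernetes-version; a downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and the recreate/second-cluster suggestions shown above. Sketch with this run's versions and a placeholder profile:

	minikube start -p demo --kubernetes-version=v1.14.0 --driver=docker
	minikube stop -p demo
	minikube start -p demo --kubernetes-version=v1.22.2-rc.0 --driver=docker   # in-place upgrade
	minikube start -p demo --kubernetes-version=v1.14.0 --driver=docker        # fails: exit 106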

TestMissingContainerUpgrade (1174.76s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:313: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.278031355.exe start -p missing-upgrade-20210915032655-22140 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:313: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.278031355.exe start -p missing-upgrade-20210915032655-22140 --memory=2200 --driver=docker: (8m7.1845465s)
version_upgrade_test.go:322: (dbg) Run:  docker stop missing-upgrade-20210915032655-22140
version_upgrade_test.go:322: (dbg) Done: docker stop missing-upgrade-20210915032655-22140: (13.9425459s)
version_upgrade_test.go:327: (dbg) Run:  docker rm missing-upgrade-20210915032655-22140
version_upgrade_test.go:327: (dbg) Done: docker rm missing-upgrade-20210915032655-22140: (1.1007471s)
version_upgrade_test.go:333: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20210915032655-22140 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:333: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20210915032655-22140 --memory=2200 --alsologtostderr -v=1 --driver=docker: (10m35.4681727s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210915032655-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20210915032655-22140

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20210915032655-22140: (36.3086734s)
--- PASS: TestMissingContainerUpgrade (1174.76s)

TestPause/serial/Start (624.37s)

=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210915030944-22140 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20210915030944-22140 --memory=2048 --install-addons=false --wait=all --driver=docker: (10m24.3638218s)
--- PASS: TestPause/serial/Start (624.37s)

TestStoppedBinaryUpgrade/Upgrade (978.32s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:187: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3499853798.exe start -p stopped-upgrade-20210915030944-22140 --memory=2200 --vm-driver=docker
E0915 03:13:21.204395   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:13:22.132622   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:13:23.734016   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:23.740241   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:23.756766   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:23.778701   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:23.819473   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:23.900413   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:24.061506   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:24.388398   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:25.029271   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:26.311143   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:28.878802   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:33.999223   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:13:44.240805   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:14:04.725472   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:14:45.688196   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:16:07.610591   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:16:25.221898   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:18:21.201177   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:18:22.131151   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:18:23.733651   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:18:51.456678   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:19:44.278451   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
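Note on the repeated cert_rotation.go:168 errors above: client-go's certificate-rotation watcher is still tracking client.crt files that belong to profiles earlier tests already deleted (addons-, functional-, skaffold-), so each reload attempt fails with a missing-path error and is requeued with backoff; the gaps between timestamps roughly double, from a few milliseconds up to a capped interval. The errors are noise relative to the tests actually running. A minimal sketch of the failure mode in Go, assuming a hypothetical profile path (illustrative only, not the client-go source):

package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

func main() {
	// Hypothetical profile directory that has already been deleted.
	certFile := `C:\Users\jenkins\minikube-integration\.minikube\profiles\example-profile\client.crt`
	keyFile := `C:\Users\jenkins\minikube-integration\.minikube\profiles\example-profile\client.key`

	// Reloading the pair fails with an *os.PathError ("The system cannot
	// find the path specified." on Windows), which is what the watcher
	// surfaces as "key failed with : open ...".
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		fmt.Fprintln(os.Stderr, "key failed with :", err)
	}
}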

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:187: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3499853798.exe start -p stopped-upgrade-20210915030944-22140 --memory=2200 --vm-driver=docker: (11m58.6643366s)
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3499853798.exe -p stopped-upgrade-20210915030944-22140 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3499853798.exe -p stopped-upgrade-20210915030944-22140 stop: (35.1392224s)
version_upgrade_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20210915030944-22140 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20210915030944-22140 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m44.506469s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (978.32s)
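The three steps above are the entire upgrade scenario: start a cluster with the old release binary (v1.9.0), stop it with that same binary, then start it again with the freshly built binary and require the restart to succeed. A compressed sketch of the flow, with placeholder binary paths and profile name (the real test wraps these calls in its own helpers and assertions):

package main

import (
	"log"
	"os/exec"
)

// run shells out the way the test helpers do and fails fast on error.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBinary := `C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.exe` // placeholder path
	newBinary := `out\minikube-windows-amd64.exe`
	profile := "stopped-upgrade-demo" // placeholder profile name

	run(oldBinary, "start", "-p", profile, "--memory=2200", "--vm-driver=docker")
	run(oldBinary, "stop", "-p", profile)
	run(newBinary, "start", "-p", profile, "--memory=2200", "--driver=docker")
}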

TestPause/serial/SecondStartNoReconfiguration (94.44s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210915030944-22140 --alsologtostderr -v=1 --driver=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20210915030944-22140 --alsologtostderr -v=1 --driver=docker: (1m34.3887596s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (94.44s)

TestPause/serial/Pause (18.18s)

=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20210915030944-22140 --alsologtostderr -v=5

=== CONT  TestPause/serial/Pause
pause_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20210915030944-22140 --alsologtostderr -v=5: (18.1791379s)
--- PASS: TestPause/serial/Pause (18.18s)

TestPause/serial/Unpause (13.37s)

=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20210915030944-22140 --alsologtostderr -v=5
pause_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20210915030944-22140 --alsologtostderr -v=5: (13.3738057s)
--- PASS: TestPause/serial/Unpause (13.37s)

TestPause/serial/PauseAgain (24.21s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20210915030944-22140 --alsologtostderr -v=5
E0915 03:23:21.201328   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:23:22.129935   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
pause_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20210915030944-22140 --alsologtostderr -v=5: (24.2053235s)
--- PASS: TestPause/serial/PauseAgain (24.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (20.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210915030944-22140

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:210: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210915030944-22140: (20.8655272s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (20.87s)

TestStartStop/group/old-k8s-version/serial/FirstStart (589.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210915033621-22140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0
E0915 03:36:24.277275   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20210915033621-22140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: (9m49.9892188s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (589.99s)

TestStartStop/group/no-preload/serial/FirstStart (1028.22s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210915034542-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20210915034542-22140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0: (17m8.2228198s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (1028.22s)

TestStartStop/group/old-k8s-version/serial/DeployApp (20.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20210915033621-22140 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context old-k8s-version-20210915033621-22140 create -f testdata\busybox.yaml: (1.3201097s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [78b28b8a-15d7-11ec-aadc-024240e36d6c] Pending
helpers_test.go:343: "busybox" [78b28b8a-15d7-11ec-aadc-024240e36d6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [78b28b8a-15d7-11ec-aadc-024240e36d6c] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 18.1201775s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20210915033621-22140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (20.47s)
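This DeployApp pattern repeats for each profile below: create the busybox pod from testdata\busybox.yaml, wait up to 8m0s for pods labeled integration-test=busybox to reach Running/Ready, then exec ulimit -n inside the container to confirm it is usable. Roughly the same wait can be expressed with kubectl alone; this sketch borrows the context name from the log and substitutes kubectl wait for the test's own polling helper:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Context name taken from the log; kubectl wait stands in for the
	// test's 8m0s label-selector polling.
	ctx := "old-k8s-version-20210915033621-22140"
	wait := exec.Command("kubectl", "--context", ctx, "wait", "pod",
		"-l", "integration-test=busybox",
		"--for=condition=Ready", "--timeout=8m")
	if out, err := wait.CombinedOutput(); err != nil {
		log.Fatalf("busybox never became Ready: %v\n%s", err, out)
	}
	// Once Ready, the test checks the container by reading its
	// open-file limit, exactly as the log shows.
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		log.Fatalf("exec failed: %v\n%s", err, out)
	}
	log.Printf("ulimit -n inside busybox: %s", out)
}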

TestStartStop/group/embed-certs/serial/FirstStart (621.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210915034625-22140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1
E0915 03:46:26.814904   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20210915034625-22140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1: (10m21.9722212s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (621.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20210915033621-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20210915033621-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.7744033s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20210915033621-22140 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Done: kubectl --context old-k8s-version-20210915033621-22140 describe deploy/metrics-server -n kube-system: (1.9314724s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.72s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (593.75s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915034637-22140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915034637-22140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1: (9m53.7465544s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (593.75s)

TestStartStop/group/old-k8s-version/serial/Stop (23.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20210915033621-22140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20210915033621-22140 --alsologtostderr -v=3: (23.9803243s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (23.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140: exit status 7 (3.2880435s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect old-k8s-version-20210915033621-22140 --format={{.State.Status}}" took an unusually long time: 2.6600567s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210915033621-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210915033621-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.4608584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.75s)
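minikube status encodes cluster state in its exit code as well as its stdout; the stopped profile yields exit status 7 here, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon. A sketch of that tolerance, assuming the binary path and profile name shown in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(`out\minikube-windows-amd64.exe`, "status",
		"--format={{.Host}}",
		"-p", "old-k8s-version-20210915033621-22140",
		"-n", "old-k8s-version-20210915033621-22140")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host: %s\n", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 marks a stopped host; the test carries on ("may be ok").
		fmt.Printf("host stopped (exit 7, may be ok): %s\n", out)
	default:
		panic(err)
	}
}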

TestStartStop/group/old-k8s-version/serial/SecondStart (827.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210915033621-22140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0
E0915 03:48:21.195265   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:48:22.124907   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:48:23.726716   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
E0915 03:49:45.229322   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:53:04.277256   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:53:21.194062   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:53:22.123129   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:53:23.726457   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20210915033621-22140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: (13m38.717819s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140: (8.4338631s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (827.15s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (68.47s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20210915034637-22140 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20210915034637-22140 create -f testdata\busybox.yaml: (4.6410498s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [acca31ce-4f40-499d-bafc-087b8fedcb42] Pending
helpers_test.go:343: "busybox" [acca31ce-4f40-499d-bafc-087b8fedcb42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [acca31ce-4f40-499d-bafc-087b8fedcb42] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 59.4578243s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20210915034637-22140 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20210915034637-22140 exec busybox -- /bin/sh -c "ulimit -n": (4.2538831s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (68.47s)

TestStartStop/group/embed-certs/serial/DeployApp (47.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20210915034625-22140 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20210915034625-22140 create -f testdata\busybox.yaml: (4.4397169s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e1f2dcbd-dd79-41d2-9ac0-c2f669d5908d] Pending
helpers_test.go:343: "busybox" [e1f2dcbd-dd79-41d2-9ac0-c2f669d5908d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [e1f2dcbd-dd79-41d2-9ac0-c2f669d5908d] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 40.4666753s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20210915034625-22140 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20210915034625-22140 exec busybox -- /bin/sh -c "ulimit -n": (2.4753186s)
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (47.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (16.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20210915034625-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20210915034625-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (15.3134648s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20210915034625-22140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (16.21s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (15.54s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20210915034637-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20210915034637-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (14.5497087s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20210915034637-22140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (15.54s)

TestStartStop/group/embed-certs/serial/Stop (33.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20210915034625-22140 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20210915034625-22140 --alsologtostderr -v=3: (33.9580001s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (33.96s)

TestStartStop/group/default-k8s-different-port/serial/Stop (36.81s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20210915034637-22140 --alsologtostderr -v=3
E0915 03:58:21.193527   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 03:58:22.122754   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 03:58:23.724659   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20210915034637-22140 --alsologtostderr -v=3: (36.814175s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (36.81s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (4.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140: exit status 7 (2.7636103s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect embed-certs-20210915034625-22140 --format={{.State.Status}}" took an unusually long time: 2.1080435s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210915034625-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210915034625-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.3060939s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (4.07s)

TestStartStop/group/embed-certs/serial/SecondStart (920.34s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210915034625-22140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20210915034625-22140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1: (15m11.9749189s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140: (8.3638391s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (920.34s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (4.64s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140: exit status 7 (1.4867164s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20210915034637-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20210915034637-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.1507609s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (4.64s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (949.69s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915034637-22140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915034637-22140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1: (15m43.043618s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140: (6.6460499s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (949.69s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-24nnp" [104cc1c5-15d9-11ec-a644-0242223fd995] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.144565s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.17s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-24nnp" [104cc1c5-15d9-11ec-a644-0242223fd995] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1955139s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210915033621-22140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (5.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20210915033621-22140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20210915033621-22140 "sudo crictl images -o json": (5.5770649s)
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (5.58s)
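VerifyKubernetesImages lists every image present inside the node over SSH and flags anything outside the expected Kubernetes set, which is why busybox:1.28.4-glibc is reported as non-minikube. A sketch of the same scan, assuming crictl's JSON output shape ({"images": [{"repoTags": [...]}, ...]}) and using a deliberately crude registry-prefix filter for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the assumed shape of `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command(`out\minikube-windows-amd64.exe`, "ssh",
		"-p", "old-k8s-version-20210915033621-22140",
		"sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Crude filter for illustration: anything not from k8s.gcr.io
			// counts as a non-minikube image (busybox:1.28.4-glibc here).
			if !strings.HasPrefix(tag, "k8s.gcr.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}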

TestStartStop/group/old-k8s-version/serial/Pause (55.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20210915033621-22140 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-20210915033621-22140 --alsologtostderr -v=1: (16.7834839s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140: exit status 2 (6.2556009s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	! Executing "docker container inspect old-k8s-version-20210915033621-22140 --format={{.State.Status}}" took an unusually long time: 2.3379579s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140: exit status 2 (6.2189219s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect old-k8s-version-20210915033621-22140 --format={{.State.Status}}" took an unusually long time: 2.2278757s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-20210915033621-22140 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-20210915033621-22140 --alsologtostderr -v=1: (10.6698536s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140: (8.1083284s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210915033621-22140 -n old-k8s-version-20210915033621-22140: (7.3333655s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (55.37s)
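The Pause subtest above drives a full pause/verify/unpause/verify cycle: while paused, status --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped, both with exit status 2, which the test again accepts ("may be ok"); after unpause both queries exit zero. A sketch of the paused-state query, reusing the profile name from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// status runs one minikube status query and returns trimmed stdout plus
// the exit code (0 when the command succeeds).
func status(format, profile string) (string, int, error) {
	cmd := exec.Command(`out\minikube-windows-amd64.exe`, "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return strings.TrimSpace(string(out)), exitErr.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	profile := "old-k8s-version-20210915033621-22140"
	api, code, _ := status("{{.APIServer}}", profile)
	fmt.Printf("apiserver=%s exit=%d\n", api, code) // expect Paused / 2 while paused
	kubelet, code, _ := status("{{.Kubelet}}", profile)
	fmt.Printf("kubelet=%s exit=%d\n", kubelet, code) // expect Stopped / 2 while paused
}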

TestStartStop/group/no-preload/serial/DeployApp (23.77s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20210915034542-22140 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20210915034542-22140 create -f testdata\busybox.yaml: (1.4434284s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ab564121-bfb1-4923-94fd-9d94283e5e0b] Pending
helpers_test.go:343: "busybox" [ab564121-bfb1-4923-94fd-9d94283e5e0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [ab564121-bfb1-4923-94fd-9d94283e5e0b] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 20.2405513s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20210915034542-22140 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20210915034542-22140 exec busybox -- /bin/sh -c "ulimit -n": (1.9906131s)
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (23.77s)

TestStartStop/group/newest-cni/serial/FirstStart (362.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210915040258-22140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 04:03:06.812509   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20210915040258-22140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0: (6m2.2793266s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (362.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (24.77s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20210915034542-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0915 04:03:21.191482   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 04:03:22.125198   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 04:03:23.723305   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20210915034542-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (23.9036728s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20210915034542-22140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (24.77s)

TestStartStop/group/no-preload/serial/Stop (31.75s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20210915034542-22140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20210915034542-22140 --alsologtostderr -v=3: (31.7513999s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (31.75s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (3.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915034542-22140 -n no-preload-20210915034542-22140
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915034542-22140 -n no-preload-20210915034542-22140: exit status 7 (1.6059679s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210915034542-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210915034542-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.5142742s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (3.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (18.74s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20210915040258-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20210915040258-22140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (18.7387191s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (18.74s)

TestStartStop/group/newest-cni/serial/Stop (32.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20210915040258-22140 --alsologtostderr -v=3
E0915 04:09:44.276206   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20210915040258-22140 --alsologtostderr -v=3: (32.2133276s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (32.21s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140: exit status 7 (2.7457274s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}" took an unusually long time: 2.155442s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20210915040258-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20210915040258-22140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.302289s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.05s)

TestStartStop/group/newest-cni/serial/SecondStart (235.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210915040258-22140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 04:11:12.963614   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:11:40.664314   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915033621-22140\client.crt: The system cannot find the path specified.
E0915 04:13:21.190183   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915015618-22140\client.crt: The system cannot find the path specified.
E0915 04:13:22.128407   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915013001-22140\client.crt: The system cannot find the path specified.
E0915 04:13:23.720149   22140 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915030334-22140\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20210915040258-22140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0: (3m48.0286858s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140: (7.9183641s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (235.95s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.29s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-hc9bd" [a72b6963-9545-4645-94ad-21bbfa7f9a63] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.2835341s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.29s)
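The wait above is performed by the test helper, but a roughly equivalent manual check can be run with kubectl (context name taken from this run; kubectl wait polls for pod readiness rather than reimplementing the helper's loop):

kubectl --context embed-certs-20210915034625-22140 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m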

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (6.76s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20210915040258-22140 "sudo crictl images -o json"

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20210915040258-22140 "sudo crictl images -o json": (6.7589821s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (6.76s)
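For anyone reproducing this image check by hand, a minimal sketch of the same probe (profile name from this run; jq on the host is an assumption, not part of the harness):

out/minikube-windows-amd64.exe ssh -p newest-cni-20210915040258-22140 "sudo crictl images -o json"
# crictl returns {"images": [{"repoTags": [...], ...}, ...]}; listing the tags shows
# the set the test compares against its expected Kubernetes images:
out/minikube-windows-amd64.exe ssh -p newest-cni-20210915040258-22140 "sudo crictl images -o json" | jq -r ".images[].repoTags[]"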

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.98s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-hc9bd" [a72b6963-9545-4645-94ad-21bbfa7f9a63] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1692589s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210915034625-22140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.98s)

TestStartStop/group/newest-cni/serial/Pause (48.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20210915040258-22140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-20210915040258-22140 --alsologtostderr -v=1: (15.596483s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140: exit status 2 (4.5269466s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140: exit status 2 (6.8458686s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect newest-cni-20210915040258-22140 --format={{.State.Status}}" took an unusually long time: 2.4034737s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-20210915040258-22140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-20210915040258-22140 --alsologtostderr -v=1: (8.2079809s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140: (6.0217684s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915040258-22140 -n newest-cni-20210915040258-22140: (7.1380076s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (48.34s)
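The pause subtest drives a fixed cycle: pause, check component status, unpause, check again. A minimal sketch of that cycle against the same profile; note that minikube status intentionally exits non-zero while components are not running, which is why the harness records "exit status 2 (may be ok)" instead of failing:

out/minikube-windows-amd64.exe pause -p newest-cni-20210915040258-22140
out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915040258-22140   # "Paused", exit status 2
out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915040258-22140     # "Stopped" while paused, exit status 2
out/minikube-windows-amd64.exe unpause -p newest-cni-20210915040258-22140
out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915040258-22140   # "Running", exit status 0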

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (6.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20210915034625-22140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20210915034625-22140 "sudo crictl images -o json": (6.2247043s)
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (6.23s)

TestStartStop/group/embed-certs/serial/Pause (56.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20210915034625-22140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20210915034625-22140 --alsologtostderr -v=1: (11.8983262s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140: exit status 2 (6.5167257s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	! Executing "docker container inspect embed-certs-20210915034625-22140 --format={{.State.Status}}" took an unusually long time: 2.2884541s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140: exit status 2 (5.7168214s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect embed-certs-20210915034625-22140 --format={{.State.Status}}" took an unusually long time: 2.2158576s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20210915034625-22140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20210915034625-22140 --alsologtostderr -v=1: (12.9963291s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140: (10.9309831s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915034625-22140 -n embed-certs-20210915034625-22140: (8.0261416s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (56.09s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6jgml" [fc2419d9-07fc-4d20-999a-12fc0a12430c] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.2184129s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.25s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.50s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6jgml" [fc2419d9-07fc-4d20-999a-12fc0a12430c] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1029576s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210915034637-22140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context default-k8s-different-port-20210915034637-22140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.3241606s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.50s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.87s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20210915034637-22140 "sudo crictl images -o json"

=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20210915034637-22140 "sudo crictl images -o json": (7.8699337s)
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.87s)

TestStartStop/group/default-k8s-different-port/serial/Pause (53.81s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20210915034637-22140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20210915034637-22140 --alsologtostderr -v=1: (23.3382377s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140: exit status 2 (4.8959504s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140: exit status 2 (6.4534892s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect default-k8s-different-port-20210915034637-22140 --format={{.State.Status}}" took an unusually long time: 2.416365s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20210915034637-22140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20210915034637-22140 --alsologtostderr -v=1: (7.935763s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140: (5.917972s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915034637-22140 -n default-k8s-different-port-20210915034637-22140: (5.2616787s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (53.81s)


Test skip (22/232)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.22.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.1/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.1/cached-images (0.00s)

TestDownloadOnly/v1.22.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.1/binaries (0.00s)

TestDownloadOnly/v1.22.2-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.2-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (66.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:253: registry stabilized in 132.6779ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:255: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-vcnr4" [b82740ef-3be8-4a6c-90bb-5dfc98bc78c1] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:255: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.2121436s
addons_test.go:258: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-h7l25" [cb7460f6-df0a-4c34-8882-3c889fc96576] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:258: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.1188732s
addons_test.go:263: (dbg) Run:  kubectl --context addons-20210915013001-22140 delete po -l run=registry-test --now
addons_test.go:268: (dbg) Run:  kubectl --context addons-20210915013001-22140 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:268: (dbg) Done: kubectl --context addons-20210915013001-22140 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (55.775584s)
addons_test.go:278: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (66.82s)
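The hostname probed above follows the standard in-cluster DNS form service.namespace.svc.cluster.local, so the check exercises cluster DNS as well as the registry addon itself, and wget --spider verifies reachability without fetching a body. A minimal sketch of the same probe, using a hypothetical pod name so it cannot collide with the test's own:

kubectl --context addons-20210915013001-22140 run --rm registry-probe --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"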

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:42: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:977: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20210915015618-22140 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:988: output didn't produce a URL
functional_test.go:982: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20210915015618-22140 --alsologtostderr -v=1] ...
helpers_test.go:489: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:59: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmd (50.02s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1477: (dbg) Run:  kubectl --context functional-20210915015618-22140 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1483: (dbg) Run:  kubectl --context functional-20210915015618-22140 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1488: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-qcw6b" [d0f03f27-657f-4faa-aca1-20b5cd736976] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-qcw6b" [d0f03f27-657f-4faa-aca1-20b5cd736976] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1488: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 43.2010878s
functional_test.go:1492: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915015618-22140 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1492: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915015618-22140 service list: (5.7433848s)
functional_test.go:1501: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (50.02s)
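The skip is expected on port-forwarded drivers such as Docker on Windows, where NodePort services are not directly reachable from the host (issue 7383 above). A hedged workaround sketch, not part of the test: minikube service opens a host-side tunnel and prints a usable URL:

out/minikube-windows-amd64.exe service hello-node --url -p functional-20210915015618-22140
# Holds a tunnel process open and prints something like http://127.0.0.1:<port>;
# keep it running while the URL is in use.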

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:647: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:78: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestStartStop/group/disable-driver-mounts (7.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210915034629-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210915034629-22140

=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210915034629-22140: (7.1386868s)
--- SKIP: TestStartStop/group/disable-driver-mounts (7.14s)

TestNetworkPlugins/group/flannel (8.76s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210915032655-22140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20210915032655-22140

=== CONT  TestNetworkPlugins/group/flannel
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20210915032655-22140: (8.7643509s)
--- SKIP: TestNetworkPlugins/group/flannel (8.76s)
