Test Report: Docker_Linux_containerd 12739

65d4aecda65478a8e2ccd637d75539d3b6910207:2021-10-20:20973

Test fail (5/269)

Order  Failed test                                    Duration (s)
37     TestAddons/serial/GCPAuth                      491.43
76     TestFunctional/serial/LogsFileCmd              1
195    TestScheduledStopUnix                          89.34
221    TestPause/serial/PauseAgain                    19.05
328    TestStartStop/group/embed-certs/serial/Pause   6.24
TestAddons/serial/GCPAuth (491.43s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:616: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/busybox.yaml
addons_test.go:622: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [67548721-bfc5-4876-af77-44bdfbd3e678] Pending
helpers_test.go:342: "busybox" [67548721-bfc5-4876-af77-44bdfbd3e678] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [67548721-bfc5-4876-af77-44bdfbd3e678] Running
addons_test.go:622: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006613082s
addons_test.go:628: (dbg) Run:  kubectl --context addons-20211019231643-6808 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:665: (dbg) Run:  kubectl --context addons-20211019231643-6808 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:681: (dbg) Run:  kubectl --context addons-20211019231643-6808 apply -f testdata/private-image.yaml
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7ff9c8c74f-4ttlt" [3b1faae0-8021-458a-80e6-050c71b305b7] Pending
helpers_test.go:342: "private-image-7ff9c8c74f-4ttlt" [3b1faae0-8021-458a-80e6-050c71b305b7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
addons_test.go:688: ***** TestAddons/serial/GCPAuth: pod "integration-test=private-image" failed to start within 8m0s: timed out waiting for the condition ****
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20211019231643-6808 -n addons-20211019231643-6808
addons_test.go:688: TestAddons/serial/GCPAuth: showing logs for failed pods as of 2021-10-19 23:28:05.532072492 +0000 UTC m=+723.048611935
addons_test.go:688: (dbg) Run:  kubectl --context addons-20211019231643-6808 describe po private-image-7ff9c8c74f-4ttlt -n default
addons_test.go:688: (dbg) kubectl --context addons-20211019231643-6808 describe po private-image-7ff9c8c74f-4ttlt -n default:
Name:         private-image-7ff9c8c74f-4ttlt
Namespace:    default
Priority:     0
Node:         addons-20211019231643-6808/192.168.49.2
Start Time:   Tue, 19 Oct 2021 23:20:05 +0000
Labels:       integration-test=private-image
pod-template-hash=7ff9c8c74f
Annotations:  <none>
Status:       Pending
IP:           10.244.0.37
IPs:
IP:           10.244.0.37
Controlled By:  ReplicaSet/private-image-7ff9c8c74f
Containers:
private-image:
Container ID:   
Image:          us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      k8s-minikube
GCP_PROJECT:                     k8s-minikube
GCLOUD_PROJECT:                  k8s-minikube
GOOGLE_CLOUD_PROJECT:            k8s-minikube
CLOUDSDK_CORE_PROJECT:           k8s-minikube
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wj62d (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-wj62d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/private-image-7ff9c8c74f-4ttlt to addons-20211019231643-6808
Normal   Pulling    6m27s (x4 over 8m)      kubelet            Pulling image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4"
Warning  Failed     6m27s (x4 over 8m)      kubelet            Failed to pull image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": rpc error: code = Unknown desc = failed to pull and unpack image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": failed to resolve reference "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
Warning  Failed     6m27s (x4 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     6m13s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m45s (x21 over 7m59s)  kubelet            Back-off pulling image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4"
addons_test.go:688: (dbg) Run:  kubectl --context addons-20211019231643-6808 logs private-image-7ff9c8c74f-4ttlt -n default
addons_test.go:688: (dbg) Non-zero exit: kubectl --context addons-20211019231643-6808 logs private-image-7ff9c8c74f-4ttlt -n default: exit status 1 (81.090977ms)

** stderr ** 
	Error from server (BadRequest): container "private-image" in pod "private-image-7ff9c8c74f-4ttlt" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:688: kubectl --context addons-20211019231643-6808 logs private-image-7ff9c8c74f-4ttlt -n default: exit status 1
addons_test.go:690: wait for private image: integration-test=private-image within 8m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/GCPAuth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-20211019231643-6808
helpers_test.go:235: (dbg) docker inspect addons-20211019231643-6808:

-- stdout --
	[
	    {
	        "Id": "8f162263d5cf8b69455b5bc6a8040d992de0577c0c38d7210512b17b45642358",
	        "Created": "2021-10-19T23:16:51.842068343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8379,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-10-19T23:16:52.437990054Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/8f162263d5cf8b69455b5bc6a8040d992de0577c0c38d7210512b17b45642358/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f162263d5cf8b69455b5bc6a8040d992de0577c0c38d7210512b17b45642358/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f162263d5cf8b69455b5bc6a8040d992de0577c0c38d7210512b17b45642358/hosts",
	        "LogPath": "/var/lib/docker/containers/8f162263d5cf8b69455b5bc6a8040d992de0577c0c38d7210512b17b45642358/8f162263d5cf8b69455b5bc6a8040d992de0577c0c38d7210512b17b45642358-json.log",
	        "Name": "/addons-20211019231643-6808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-20211019231643-6808:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20211019231643-6808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7add440d339fff7cf0f67e51388ab0932bcba8864db777ada90b8bc7c36fb6f1-init/diff:/var/lib/docker/overlay2/c5f02e45537672705e6e2aba01e28e51c2d46e9e49272769c2467ad01e3a8798/diff:/var/lib/docker/overlay2/0ea8f075729b71baf4ccd823eb0c485ef5ef29bac0353c53e6065129629066ba/diff:/var/lib/docker/overlay2/88533b581a0009f254ebbf2a93788f6fea438104765bb8cd83e8242f6f007976/diff:/var/lib/docker/overlay2/14bd4aa948d82ba23130a19a3ed9b4398666fdb3b6961d42dfbea97f89c5cc8d/diff:/var/lib/docker/overlay2/c45d302040f8b5fa2a3863603e33a4a8945fc269c7c7d5b5fed4c0af74e1a318/diff:/var/lib/docker/overlay2/895bf38a811e31bbe8c73359865eb48ad82bbb675c072510dec392e208b25255/diff:/var/lib/docker/overlay2/888eda45177c40a5e8c507f72bfcd0e63d9fb3fbf18304e303b30f7858d75385/diff:/var/lib/docker/overlay2/1ba6be2d4915efd99d70838596bc306c3a5986d1687b7b5c82e3bcce08f2739a/diff:/var/lib/docker/overlay2/dba40492bd25835ae3cae4014121bd7743a6df6c6b6f568b50a45660d36da1ee/diff:/var/lib/docker/overlay2/f5e2e9f4ecf2e4c242e9a2d0f40f87313ae18433e49a64f8e612bc21342ba35a/diff:/var/lib/docker/overlay2/55c3bd632609db28efc8e436188a29be66e40c950a2f07ba9a636848bcf8fe54/diff:/var/lib/docker/overlay2/e743e9a12912c11573135dbbc7915737001fecbbddc57c8609bb97c48e477ab5/diff:/var/lib/docker/overlay2/0d2ab28eea343e33cf288e808e3583a4ef3a2d0f0e152cb4b6a696fe2829aa37/diff:/var/lib/docker/overlay2/a9d9379f41d6df50597861e1eca41e6abb25962190c7c4bf07f9b6cecd398ab8/diff:/var/lib/docker/overlay2/246f3a84f3d47634be292753e6ab114ff0fdd6c19174f98d20d3631b985bd65a/diff:/var/lib/docker/overlay2/1555899ff98b2aefb432f4f7413f555e1419a0c7f1ff8906ff527987be55f98f/diff:/var/lib/docker/overlay2/2046d76f6555966732657251ae8f140fcb5c9ba7becf9f4ea2f38d02553b81a8/diff:/var/lib/docker/overlay2/45481a57311300993df1fbf7c1affd06d3f1b0b4f3d4786b38e20ed6419a53d4/diff:/var/lib/docker/overlay2/3e461aa0d9087cb03e88430b7acc263fdb5b7c9685b89e24b2e636c0b764ff95/diff:/var/lib/docker/overlay2/fd0a889ad13f00ae1f69b1916a5927927a49c38948b00d54db2145eed106176b/diff:/var/lib/docker/overlay2/b9ba91322e89a003f05f9d5f11671de5af7cda4550b6f60c9d4ff90724e4e6f9/diff:/var/lib/docker/overlay2/e778c4cc31a31336a7ba77c6088ba8748d9c140197750d9e6c90b127c01e530b/diff:/var/lib/docker/overlay2/95fbf604ebdd8cc367f642230ac0a68cbc72994b41f02db95f354c4fdc75af24/diff:/var/lib/docker/overlay2/3f2110c7a6beb28d94b23699e2ca6363aa4ea63d43035d2e539e47fcd331a2ba/diff:/var/lib/docker/overlay2/1bf0ef7d7a45ab3ea27aa3d2f6b34dce6e59fc18e7e464e718352e8ea7029fd8/diff:/var/lib/docker/overlay2/23c70a1791e26b5ae01282413205d77ba4ecd95b3e170ea40176e856715a9359/diff:/var/lib/docker/overlay2/d830811487929eab10415090b3d97a6d7865d450267b84aa4184392814d8b1df/diff:/var/lib/docker/overlay2/b36fdfbf0198d3321dcce07dba3a6910f2cbe2dc276d6530613f56cd94afcc31/diff:/var/lib/docker/overlay2/cc70ad4a43336c6e67e06e2c9e0a2f30140594f1db80424144ea2b077c4ab1af/diff:/var/lib/docker/overlay2/d728e7526151cbb5fa43fbde71e65b654ce3b44983512cd2e690c8798d0d3ef8/diff:/var/lib/docker/overlay2/d701d06f30bc941260abb1757ee832c9aa2195a14f1552bd9d4baf5e6144d5ad/diff:/var/lib/docker/overlay2/6bd5ed3108886521103186b5318e03a8d90f0dbc80d27990929d83cd889e533d/diff:/var/lib/docker/overlay2/e27df6450133558da7555ef518af1a37ae6a07ce47c6767fec076ca914b56d50/diff:/var/lib/docker/overlay2/f820866aa600bd3aaac0b53c24ed1df712be2f6192e572cb551d67d8ed5f2ae8/diff:/var/lib/docker/overlay2/867ec5e44c5a5ca3a4bbcb5e3ebe23023fd7c7f42cc297619dd37fa19e58a4ce/diff:/var/lib/docker/overlay2/1de77d9dd95b7ad0b72d0e674dbf02cf40000662dc54842d9d003febd75e9183/diff:/var/lib/docker/overlay2/8249f5dae9e196bbeeca2cec8401a9bd3dc73ea9eb3884270fea296047dc72e1/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7add440d339fff7cf0f67e51388ab0932bcba8864db777ada90b8bc7c36fb6f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7add440d339fff7cf0f67e51388ab0932bcba8864db777ada90b8bc7c36fb6f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7add440d339fff7cf0f67e51388ab0932bcba8864db777ada90b8bc7c36fb6f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-20211019231643-6808",
	                "Source": "/var/lib/docker/volumes/addons-20211019231643-6808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20211019231643-6808",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20211019231643-6808",
	                "name.minikube.sigs.k8s.io": "addons-20211019231643-6808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2655abc7c3a43ba9391da164478dcd2ba21d0093579106519be56e319058e1e2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2655abc7c3a4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20211019231643-6808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8f162263d5cf"
	                    ],
	                    "NetworkID": "9fe80a335fe7f5271ba5261451f969b8d4d0d2b536d222ce7822241de2678592",
	                    "EndpointID": "b8ba744e5c8bad855fab98e45597ee921a7d438f3589f02ba4e306ef9561fca0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20211019231643-6808 -n addons-20211019231643-6808
helpers_test.go:244: <<< TestAddons/serial/GCPAuth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/GCPAuth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-20211019231643-6808 logs -n 25: (1.027349338s)
helpers_test.go:252: TestAddons/serial/GCPAuth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                 |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                               | download-only-20211019231602-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:16:30 UTC | Tue, 19 Oct 2021 23:16:31 UTC |
	| delete  | -p                                  | download-only-20211019231602-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:16:31 UTC | Tue, 19 Oct 2021 23:16:31 UTC |
	|         | download-only-20211019231602-6808   |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-only-20211019231602-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:16:31 UTC | Tue, 19 Oct 2021 23:16:31 UTC |
	|         | download-only-20211019231602-6808   |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-docker-20211019231631-6808 | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:16:40 UTC | Tue, 19 Oct 2021 23:16:42 UTC |
	|         | download-docker-20211019231631-6808 |                                     |         |         |                               |                               |
	| start   | -p addons-20211019231643-6808       | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:16:43 UTC | Tue, 19 Oct 2021 23:18:50 UTC |
	|         | --wait=true --memory=4000           |                                     |         |         |                               |                               |
	|         | --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --addons=registry                   |                                     |         |         |                               |                               |
	|         | --addons=metrics-server             |                                     |         |         |                               |                               |
	|         | --addons=olm                        |                                     |         |         |                               |                               |
	|         | --addons=volumesnapshots            |                                     |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver        |                                     |         |         |                               |                               |
	|         | --driver=docker                     |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd      |                                     |         |         |                               |                               |
	|         | --addons=ingress                    |                                     |         |         |                               |                               |
	|         | --addons=ingress-dns                |                                     |         |         |                               |                               |
	|         | --addons=helm-tiller                |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:18:50 UTC | Tue, 19 Oct 2021 23:18:50 UTC |
	|         | addons enable gcp-auth              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:18:51 UTC | Tue, 19 Oct 2021 23:18:55 UTC |
	|         | addons enable gcp-auth --force      |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:01 UTC | Tue, 19 Oct 2021 23:19:02 UTC |
	|         | addons disable metrics-server       |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:07 UTC | Tue, 19 Oct 2021 23:19:08 UTC |
	|         | addons disable helm-tiller          |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808 ip       | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:16 UTC | Tue, 19 Oct 2021 23:19:17 UTC |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:17 UTC | Tue, 19 Oct 2021 23:19:17 UTC |
	|         | addons disable registry             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808 ssh      | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:24 UTC | Tue, 19 Oct 2021 23:19:25 UTC |
	|         | curl -s http://127.0.0.1/ -H        |                                     |         |         |                               |                               |
	|         | 'Host: nginx.example.com'           |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808 ip       | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:25 UTC | Tue, 19 Oct 2021 23:19:25 UTC |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:25 UTC | Tue, 19 Oct 2021 23:19:26 UTC |
	|         | addons disable ingress-dns          |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:48 UTC | Tue, 19 Oct 2021 23:19:55 UTC |
	|         | addons disable                      |                                     |         |         |                               |                               |
	|         | csi-hostpath-driver                 |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:26 UTC | Tue, 19 Oct 2021 23:19:56 UTC |
	|         | addons disable ingress              |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20211019231643-6808          | addons-20211019231643-6808          | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:19:55 UTC | Tue, 19 Oct 2021 23:19:56 UTC |
	|         | addons disable volumesnapshots      |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/19 23:16:43
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 23:16:43.681115    7734 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:16:43.681289    7734 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:43.681297    7734 out.go:310] Setting ErrFile to fd 2...
	I1019 23:16:43.681302    7734 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:43.681400    7734 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:16:43.681644    7734 out.go:304] Setting JSON to false
	I1019 23:16:43.715640    7734 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":3566,"bootTime":1634681838,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:16:43.715741    7734 start.go:121] virtualization: kvm guest
	I1019 23:16:45.265555    7734 out.go:176] * [addons-20211019231643-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1019 23:16:45.849694    7734 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:16:45.265737    7734 notify.go:169] Checking for updates...
	I1019 23:16:46.602197    7734 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 23:16:47.321819    7734 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1019 23:16:48.319668    7734 out.go:176]   - MINIKUBE_LOCATION=12739
	I1019 23:16:48.319957    7734 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:16:48.368042    7734 docker.go:132] docker version: linux-19.03.15
	I1019 23:16:48.368637    7734 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:49.472732    7734 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.104060266s)
	I1019 23:16:49.679262    7734 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:48.400548962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:49.679336    7734 docker.go:237] overlay module found
	I1019 23:16:49.680988    7734 out.go:176] * Using the docker driver based on user configuration
	I1019 23:16:49.681006    7734 start.go:278] selected driver: docker
	I1019 23:16:49.681011    7734 start.go:751] validating driver "docker" against <nil>
	I1019 23:16:49.681028    7734 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1019 23:16:49.681065    7734 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1019 23:16:49.681086    7734 out.go:241] ! Your cgroup does not allow setting memory.
	I1019 23:16:49.682507    7734 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1019 23:16:49.683312    7734 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:49.761539    7734 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:49.716489408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:49.761622    7734 start_flags.go:266] no existing cluster config was found, will generate one from the flags 
	I1019 23:16:49.761754    7734 start_flags.go:748] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 23:16:49.761775    7734 cni.go:93] Creating CNI manager for ""
	I1019 23:16:49.761783    7734 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:16:49.761793    7734 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1019 23:16:49.761799    7734 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1019 23:16:49.761803    7734 start_flags.go:275] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 23:16:49.761811    7734 start_flags.go:280] config:
	{Name:addons-20211019231643-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:addons-20211019231643-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:49.763909    7734 out.go:176] * Starting control plane node addons-20211019231643-6808 in cluster addons-20211019231643-6808
	I1019 23:16:49.763945    7734 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1019 23:16:49.765326    7734 out.go:176] * Pulling base image ...
	I1019 23:16:49.765353    7734 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:16:49.765380    7734 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1019 23:16:49.765383    7734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:49.765396    7734 cache.go:57] Caching tarball of preloaded images
	I1019 23:16:49.765612    7734 preload.go:172] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1019 23:16:49.765629    7734 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2 on containerd
	I1019 23:16:49.765871    7734 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/config.json ...
	I1019 23:16:49.765899    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/config.json: {Name:mk807cd8761c0b550b2fb2e7ecd8fd6fa7545aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:49.858918    7734 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1019 23:16:49.858956    7734 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1019 23:16:49.858974    7734 cache.go:206] Successfully downloaded all kic artifacts
	I1019 23:16:49.859009    7734 start.go:313] acquiring machines lock for addons-20211019231643-6808: {Name:mke61498a0625ae81d99bbceefae773559dece1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:16:49.859138    7734 start.go:317] acquired machines lock for "addons-20211019231643-6808" in 107.821µs
	I1019 23:16:49.859161    7734 start.go:89] Provisioning new machine with config: &{Name:addons-20211019231643-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:addons-20211019231643-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1019 23:16:49.859253    7734 start.go:126] createHost starting for "" (driver="docker")
	I1019 23:16:49.861476    7734 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1019 23:16:49.861732    7734 start.go:160] libmachine.API.Create for "addons-20211019231643-6808" (driver="docker")
	I1019 23:16:49.861766    7734 client.go:168] LocalClient.Create starting
	I1019 23:16:49.861884    7734 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem
	I1019 23:16:49.966307    7734 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem
	I1019 23:16:50.118347    7734 cli_runner.go:115] Run: docker network inspect addons-20211019231643-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 23:16:50.153562    7734 cli_runner.go:162] docker network inspect addons-20211019231643-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 23:16:50.153621    7734 network_create.go:253] running [docker network inspect addons-20211019231643-6808] to gather additional debugging logs...
	I1019 23:16:50.153638    7734 cli_runner.go:115] Run: docker network inspect addons-20211019231643-6808
	W1019 23:16:50.188137    7734 cli_runner.go:162] docker network inspect addons-20211019231643-6808 returned with exit code 1
	I1019 23:16:50.188162    7734 network_create.go:256] error running [docker network inspect addons-20211019231643-6808]: docker network inspect addons-20211019231643-6808: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20211019231643-6808
	I1019 23:16:50.188174    7734 network_create.go:258] output of [docker network inspect addons-20211019231643-6808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20211019231643-6808
	
	** /stderr **
	I1019 23:16:50.188217    7734 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 23:16:50.222309    7734 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004b8b00] misses:0}
	I1019 23:16:50.222353    7734 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1019 23:16:50.222371    7734 network_create.go:106] attempt to create docker network addons-20211019231643-6808 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 23:16:50.222406    7734 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211019231643-6808
	I1019 23:16:50.295219    7734 network_create.go:90] docker network addons-20211019231643-6808 192.168.49.0/24 created
	I1019 23:16:50.295249    7734 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20211019231643-6808" container
	I1019 23:16:50.295299    7734 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1019 23:16:50.328988    7734 cli_runner.go:115] Run: docker volume create addons-20211019231643-6808 --label name.minikube.sigs.k8s.io=addons-20211019231643-6808 --label created_by.minikube.sigs.k8s.io=true
	I1019 23:16:50.365405    7734 oci.go:102] Successfully created a docker volume addons-20211019231643-6808
	I1019 23:16:50.365494    7734 cli_runner.go:115] Run: docker run --rm --name addons-20211019231643-6808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211019231643-6808 --entrypoint /usr/bin/test -v addons-20211019231643-6808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1019 23:16:51.727557    7734 cli_runner.go:168] Completed: docker run --rm --name addons-20211019231643-6808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211019231643-6808 --entrypoint /usr/bin/test -v addons-20211019231643-6808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.362011408s)
	I1019 23:16:51.727591    7734 oci.go:106] Successfully prepared a docker volume addons-20211019231643-6808
	W1019 23:16:51.727630    7734 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 23:16:51.727642    7734 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1019 23:16:51.727693    7734 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 23:16:51.727704    7734 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:16:51.727728    7734 kic.go:179] Starting extracting preloaded images to volume ...
	I1019 23:16:51.727796    7734 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211019231643-6808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 23:16:51.806492    7734 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20211019231643-6808 --name addons-20211019231643-6808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211019231643-6808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20211019231643-6808 --network addons-20211019231643-6808 --ip 192.168.49.2 --volume addons-20211019231643-6808:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
	I1019 23:16:52.446304    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Running}}
	I1019 23:16:52.486377    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:16:52.534715    7734 cli_runner.go:115] Run: docker exec addons-20211019231643-6808 stat /var/lib/dpkg/alternatives/iptables
	I1019 23:16:52.658434    7734 oci.go:302] the created container "addons-20211019231643-6808" has a running status.
	I1019 23:16:52.658467    7734 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa...
	I1019 23:16:52.773042    7734 kic_runner.go:187] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 23:16:53.190102    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:16:53.231216    7734 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 23:16:53.231242    7734 kic_runner.go:114] Args: [docker exec --privileged addons-20211019231643-6808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 23:16:56.901943    7734 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211019231643-6808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.174106326s)
	I1019 23:16:56.901969    7734 kic.go:188] duration metric: took 5.174240 seconds to extract preloaded images to volume
	I1019 23:16:56.902029    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:16:56.938432    7734 machine.go:88] provisioning docker machine ...
	I1019 23:16:56.938468    7734 ubuntu.go:169] provisioning hostname "addons-20211019231643-6808"
	I1019 23:16:56.938545    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:56.973466    7734 main.go:130] libmachine: Using SSH client type: native
	I1019 23:16:56.973674    7734 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1019 23:16:56.973695    7734 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20211019231643-6808 && echo "addons-20211019231643-6808" | sudo tee /etc/hostname
	I1019 23:16:57.107913    7734 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20211019231643-6808
	
	I1019 23:16:57.108109    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:57.144484    7734 main.go:130] libmachine: Using SSH client type: native
	I1019 23:16:57.144624    7734 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1019 23:16:57.144644    7734 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20211019231643-6808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20211019231643-6808/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20211019231643-6808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 23:16:57.251205    7734 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1019 23:16:57.251236    7734 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube}
	I1019 23:16:57.251258    7734 ubuntu.go:177] setting up certificates
	I1019 23:16:57.251268    7734 provision.go:83] configureAuth start
	I1019 23:16:57.251329    7734 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20211019231643-6808
	I1019 23:16:57.287066    7734 provision.go:138] copyHostCerts
	I1019 23:16:57.287130    7734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem (1123 bytes)
	I1019 23:16:57.287233    7734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem (1679 bytes)
	I1019 23:16:57.287362    7734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem (1082 bytes)
	I1019 23:16:57.287433    7734 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem org=jenkins.addons-20211019231643-6808 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20211019231643-6808]
	I1019 23:16:57.451418    7734 provision.go:172] copyRemoteCerts
	I1019 23:16:57.451472    7734 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 23:16:57.451511    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:57.487170    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:16:57.566106    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 23:16:57.583859    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 23:16:57.598537    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 23:16:57.612924    7734 provision.go:86] duration metric: configureAuth took 361.647455ms
	I1019 23:16:57.612945    7734 ubuntu.go:193] setting minikube options for container-runtime
	I1019 23:16:57.613082    7734 config.go:176] Loaded profile config "addons-20211019231643-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:16:57.613093    7734 machine.go:91] provisioned docker machine in 674.640758ms
	I1019 23:16:57.613099    7734 client.go:171] LocalClient.Create took 7.75131912s
	I1019 23:16:57.613113    7734 start.go:168] duration metric: libmachine.API.Create for "addons-20211019231643-6808" took 7.751383972s
	I1019 23:16:57.613119    7734 start.go:267] post-start starting for "addons-20211019231643-6808" (driver="docker")
	I1019 23:16:57.613124    7734 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 23:16:57.613155    7734 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 23:16:57.613188    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:57.648958    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:16:57.726935    7734 ssh_runner.go:152] Run: cat /etc/os-release
	I1019 23:16:57.729427    7734 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1019 23:16:57.729447    7734 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 23:16:57.729454    7734 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1019 23:16:57.729459    7734 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1019 23:16:57.729467    7734 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/addons for local assets ...
	I1019 23:16:57.729513    7734 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files for local assets ...
	I1019 23:16:57.729533    7734 start.go:270] post-start completed in 116.40858ms
	I1019 23:16:57.729762    7734 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20211019231643-6808
	I1019 23:16:57.765459    7734 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/config.json ...
	I1019 23:16:57.765649    7734 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 23:16:57.765683    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:57.800357    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:16:57.877607    7734 start.go:129] duration metric: createHost completed in 8.018344938s
	I1019 23:16:57.877630    7734 start.go:80] releasing machines lock for "addons-20211019231643-6808", held for 8.0184814s
	I1019 23:16:57.877692    7734 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20211019231643-6808
	I1019 23:16:57.913839    7734 ssh_runner.go:152] Run: systemctl --version
	I1019 23:16:57.913875    7734 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1019 23:16:57.913892    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:57.913916    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:16:57.951873    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:16:57.952685    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:16:58.127830    7734 ssh_runner.go:152] Run: sudo systemctl stop -f crio
	I1019 23:16:58.138868    7734 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1019 23:16:58.146888    7734 docker.go:156] disabling docker service ...
	I1019 23:16:58.146923    7734 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
	I1019 23:16:58.160841    7734 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
	I1019 23:16:58.168848    7734 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
	I1019 23:16:58.231586    7734 ssh_runner.go:152] Run: sudo systemctl mask docker.service
	I1019 23:16:58.287692    7734 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
	I1019 23:16:58.295675    7734 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 23:16:58.306590    7734 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuY2dyb3Vwc10KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLmNyaV0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My41IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKCVtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmRdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jXQogICAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgICAgU3lzdGVtZENncm91cCA9IGZhbHNlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
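The long base64 payload in the command above is the containerd `config.toml` that minikube writes to the node (the logged pipeline ends in `base64 -d | sudo tee /etc/containerd/config.toml`). A minimal Python sketch of the decode step, using a short illustrative sample rather than the full payload:

```python
import base64

# Short hypothetical sample standing in for the full payload logged above.
sample_b64 = base64.b64encode(b'root = "/var/lib/containerd"\n').decode()

# Equivalent of the `base64 -d` step in the logged command.
decoded = base64.b64decode(sample_b64).decode()
print(decoded, end="")  # root = "/var/lib/containerd"
```

The same decode applied to the real blob reproduces the TOML file the node ends up with.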
	I1019 23:16:58.317892    7734 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 23:16:58.324567    7734 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 23:16:58.324610    7734 ssh_runner.go:152] Run: sudo modprobe br_netfilter
	I1019 23:16:58.330657    7734 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 23:16:58.336187    7734 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1019 23:16:58.389197    7734 ssh_runner.go:152] Run: sudo systemctl restart containerd
	I1019 23:16:58.451328    7734 start.go:393] Will wait 60s for socket path /run/containerd/containerd.sock
	I1019 23:16:58.451425    7734 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
	I1019 23:16:58.454727    7734 start.go:414] Will wait 60s for crictl version
	I1019 23:16:58.454777    7734 ssh_runner.go:152] Run: sudo crictl version
	I1019 23:16:58.565637    7734 start.go:423] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I1019 23:16:58.565717    7734 ssh_runner.go:152] Run: containerd --version
	I1019 23:16:58.585975    7734 ssh_runner.go:152] Run: containerd --version
	I1019 23:16:58.607239    7734 out.go:176] * Preparing Kubernetes v1.22.2 on containerd 1.4.9 ...
	I1019 23:16:58.607305    7734 cli_runner.go:115] Run: docker network inspect addons-20211019231643-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 23:16:58.641625    7734 ssh_runner.go:152] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 23:16:58.644766    7734 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
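The bash pipeline above is minikube's idempotent way of pinning `host.minikube.internal` in `/etc/hosts`: strip any existing entry with `grep -v`, append a fresh one, and copy the temp file back. A minimal Python sketch of the same pattern (illustrative in-memory data, not minikube code):

```python
# Illustrative copy of an /etc/hosts file; the real command edits the node's file.
hosts = "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
entry = "192.168.49.1\thost.minikube.internal"

# Same shape as the logged pipeline: drop any stale entry, then append the fresh one.
kept = [line for line in hosts.splitlines()
        if not line.endswith("\thost.minikube.internal")]
updated = "\n".join(kept + [entry]) + "\n"
print(updated)
```

Running the update twice leaves exactly one `host.minikube.internal` entry, which is why the command is safe to repeat on every start.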
	I1019 23:16:58.653117    7734 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:16:58.653183    7734 ssh_runner.go:152] Run: sudo crictl images --output json
	I1019 23:16:58.676915    7734 containerd.go:635] all images are preloaded for containerd runtime.
	I1019 23:16:58.676933    7734 containerd.go:539] Images already preloaded, skipping extraction
	I1019 23:16:58.676971    7734 ssh_runner.go:152] Run: sudo crictl images --output json
	I1019 23:16:58.709779    7734 containerd.go:635] all images are preloaded for containerd runtime.
	I1019 23:16:58.709796    7734 cache_images.go:79] Images are preloaded, skipping loading
	I1019 23:16:58.709838    7734 ssh_runner.go:152] Run: sudo crictl info
	I1019 23:16:58.736358    7734 cni.go:93] Creating CNI manager for ""
	I1019 23:16:58.736376    7734 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:16:58.736385    7734 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1019 23:16:58.736397    7734 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20211019231643-6808 NodeName:addons-20211019231643-6808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1019 23:16:58.736512    7734 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20211019231643-6808"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
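	The kubeadm config printed above is one multi-document YAML stream: an InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A minimal Python sketch of splitting such a stream into its documents (using a trimmed stand-in for the full config, and plain string handling instead of a YAML parser):

```python
# Trimmed stand-in for the multi-document kubeadm config shown above.
config = """apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

# Split on the YAML document separator and read each document's `kind`.
docs = [d.strip() for d in config.split("---") if d.strip()]
kinds = [line.split(":", 1)[1].strip()
         for d in docs for line in d.splitlines()
         if line.startswith("kind:")]
print(kinds)
```

This is the structure kubeadm consumes when minikube later copies the file to `/var/tmp/minikube/kubeadm.yaml`.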
	
	I1019 23:16:58.736592    7734 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20211019231643-6808 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.2 ClusterName:addons-20211019231643-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1019 23:16:58.736637    7734 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.2
	I1019 23:16:58.744417    7734 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 23:16:58.744469    7734 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 23:16:58.750340    7734 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (571 bytes)
	I1019 23:16:58.761166    7734 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 23:16:58.772004    7734 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I1019 23:16:58.782663    7734 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 23:16:58.785208    7734 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 23:16:58.792972    7734 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808 for IP: 192.168.49.2
	I1019 23:16:58.793004    7734 certs.go:187] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key
	I1019 23:16:58.946680    7734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt ...
	I1019 23:16:58.946704    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt: {Name:mk8e529c279a13af064ade0975e392434acfdc3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:58.946900    7734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key ...
	I1019 23:16:58.946914    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key: {Name:mkdf7f0bc27430cb15de83e60a18104bcfea14dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:58.947018    7734 certs.go:187] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key
	I1019 23:16:59.173929    7734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt ...
	I1019 23:16:59.173958    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt: {Name:mke949834dd849c3252ddf8d138f91ca0bd4bbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.174157    7734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key ...
	I1019 23:16:59.174173    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key: {Name:mk6a1cd66b854ca972a49362feaaf3cd37cc7af7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.174331    7734 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.key
	I1019 23:16:59.174347    7734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt with IP's: []
	I1019 23:16:59.309684    7734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt ...
	I1019 23:16:59.309715    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: {Name:mkfca4c5a15dce6b6c16a451f3cdc8335d7bd772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.309891    7734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.key ...
	I1019 23:16:59.309904    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.key: {Name:mk989362770db5869bef5203c37e00ae6026b718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.309979    7734 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.key.dd3b5fb2
	I1019 23:16:59.309997    7734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1019 23:16:59.488114    7734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.crt.dd3b5fb2 ...
	I1019 23:16:59.488148    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.crt.dd3b5fb2: {Name:mkb673097f8b2b610c8f318fe3dbfb811a15893d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.488333    7734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.key.dd3b5fb2 ...
	I1019 23:16:59.488346    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.key.dd3b5fb2: {Name:mkbf911d8a85a1a2163a4bc56522ef1d8dc23325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.488431    7734 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.crt
	I1019 23:16:59.488492    7734 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.key
	I1019 23:16:59.488548    7734 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.key
	I1019 23:16:59.488566    7734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.crt with IP's: []
	I1019 23:16:59.704142    7734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.crt ...
	I1019 23:16:59.704173    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.crt: {Name:mk11d2916809ad022ee0feedf4b365ef8deb53d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.704381    7734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.key ...
	I1019 23:16:59.704398    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.key: {Name:mk6e951d477f5db4e70850f9e8373e50ea9a91d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:16:59.704592    7734 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 23:16:59.704633    7734 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem (1082 bytes)
	I1019 23:16:59.704668    7734 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem (1123 bytes)
	I1019 23:16:59.704723    7734 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem (1679 bytes)
	I1019 23:16:59.705612    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1019 23:16:59.721632    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 23:16:59.736929    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 23:16:59.752277    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 23:16:59.767052    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 23:16:59.781650    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 23:16:59.796240    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 23:16:59.810543    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 23:16:59.825125    7734 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 23:16:59.840547    7734 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 23:16:59.851015    7734 ssh_runner.go:152] Run: openssl version
	I1019 23:16:59.860450    7734 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 23:16:59.867983    7734 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:16:59.870575    7734 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 19 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:16:59.870624    7734 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:16:59.874773    7734 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 23:16:59.880905    7734 kubeadm.go:390] StartCluster: {Name:addons-20211019231643-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:addons-20211019231643-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:59.880988    7734 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1019 23:16:59.881030    7734 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 23:16:59.903851    7734 cri.go:76] found id: ""
	I1019 23:16:59.903902    7734 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 23:16:59.909931    7734 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 23:16:59.915733    7734 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1019 23:16:59.915764    7734 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 23:16:59.921425    7734 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 23:16:59.921459    7734 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 23:17:00.183031    7734 out.go:203]   - Generating certificates and keys ...
	I1019 23:17:03.251060    7734 out.go:203]   - Booting up control plane ...
	I1019 23:17:16.292790    7734 out.go:203]   - Configuring RBAC rules ...
	I1019 23:17:16.703862    7734 cni.go:93] Creating CNI manager for ""
	I1019 23:17:16.703882    7734 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:17:16.705820    7734 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1019 23:17:16.705906    7734 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1019 23:17:16.710161    7734 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1019 23:17:16.710180    7734 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1019 23:17:16.722148    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 23:17:17.000902    7734 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 23:17:17.000964    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:17.000973    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=addons-20211019231643-6808 minikube.k8s.io/updated_at=2021_10_19T23_17_17_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:17.055947    7734 ops.go:34] apiserver oom_adj: -16
	I1019 23:17:17.056034    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:17.640419    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:18.140380    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:18.640355    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:19.140625    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:19.640611    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:20.140376    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:20.640309    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:21.140283    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:21.641019    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:22.140536    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:22.641002    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:23.140900    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:23.640349    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:24.141004    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:24.640274    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:25.140644    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:25.641163    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:26.641146    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:27.140557    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:27.640767    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:28.140547    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:30.027211    7734 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.886625853s)
	I1019 23:17:31.641062    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:32.640507    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:33.140364    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:33.640617    7734 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:17:33.700116    7734 kubeadm.go:985] duration metric: took 16.699208314s to wait for elevateKubeSystemPrivileges.
	I1019 23:17:33.700148    7734 kubeadm.go:392] StartCluster complete in 33.819249894s
	I1019 23:17:33.700165    7734 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:17:33.700305    7734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:17:33.700646    7734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:17:34.222227    7734 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20211019231643-6808" rescaled to 1
	I1019 23:17:34.222280    7734 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1019 23:17:34.224425    7734 out.go:176] * Verifying Kubernetes components...
	I1019 23:17:34.222353    7734 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 23:17:34.222357    7734 addons.go:404] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress ingress-dns helm-tiller]
	I1019 23:17:34.222511    7734 config.go:176] Loaded profile config "addons-20211019231643-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:17:34.224504    7734 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:17:34.224551    7734 addons.go:65] Setting volumesnapshots=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224568    7734 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224580    7734 addons.go:65] Setting default-storageclass=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224584    7734 addons.go:65] Setting ingress-dns=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224592    7734 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20211019231643-6808"
	I1019 23:17:34.224601    7734 addons.go:65] Setting metrics-server=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224607    7734 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20211019231643-6808"
	I1019 23:17:34.224614    7734 addons.go:65] Setting registry=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224644    7734 addons.go:65] Setting helm-tiller=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.224654    7734 addons.go:153] Setting addon registry=true in "addons-20211019231643-6808"
	I1019 23:17:34.224667    7734 addons.go:153] Setting addon helm-tiller=true in "addons-20211019231643-6808"
	I1019 23:17:34.224682    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.224706    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.224573    7734 addons.go:153] Setting addon volumesnapshots=true in "addons-20211019231643-6808"
	I1019 23:17:34.224808    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.224968    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.224554    7734 addons.go:65] Setting ingress=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.225060    7734 addons.go:153] Setting addon ingress=true in "addons-20211019231643-6808"
	I1019 23:17:34.225087    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.225147    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.225228    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.224623    7734 addons.go:65] Setting storage-provisioner=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.225265    7734 addons.go:153] Setting addon storage-provisioner=true in "addons-20211019231643-6808"
	W1019 23:17:34.225278    7734 addons.go:165] addon storage-provisioner should already be in state true
	I1019 23:17:34.225301    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.224594    7734 addons.go:153] Setting addon ingress-dns=true in "addons-20211019231643-6808"
	I1019 23:17:34.225359    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.225148    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.225551    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.224614    7734 addons.go:153] Setting addon metrics-server=true in "addons-20211019231643-6808"
	I1019 23:17:34.225727    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.225744    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.224638    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.225829    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.224633    7734 addons.go:65] Setting olm=true in profile "addons-20211019231643-6808"
	I1019 23:17:34.225912    7734 addons.go:153] Setting addon olm=true in "addons-20211019231643-6808"
	I1019 23:17:34.225931    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.226174    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.226240    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.226361    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.343842    7734 out.go:176]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1019 23:17:34.343966    7734 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1019 23:17:34.343976    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1019 23:17:34.344025    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.347564    7734 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
	I1019 23:17:34.347301    7734 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 23:17:34.350635    7734 addons.go:153] Setting addon default-storageclass=true in "addons-20211019231643-6808"
	W1019 23:17:34.350654    7734 addons.go:165] addon default-storageclass should already be in state true
	I1019 23:17:34.350682    7734 host.go:66] Checking if "addons-20211019231643-6808" exists ...
	I1019 23:17:34.351320    7734 cli_runner.go:115] Run: docker container inspect addons-20211019231643-6808 --format={{.State.Status}}
	I1019 23:17:34.352790    7734 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I1019 23:17:34.359855    7734 out.go:176]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I1019 23:17:34.353125    7734 node_ready.go:35] waiting up to 6m0s for node "addons-20211019231643-6808" to be "Ready" ...
	I1019 23:17:34.360271    7734 addons.go:337] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 23:17:34.360287    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (17265 bytes)
	I1019 23:17:34.360336    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.362573    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I1019 23:17:34.362662    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 23:17:34.362678    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 23:17:34.362738    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.369988    7734 out.go:176]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I1019 23:17:34.370068    7734 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 23:17:34.370082    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1019 23:17:34.370137    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.390955    7734 out.go:176]   - Using image registry:2.7.1
	I1019 23:17:34.389750    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I1019 23:17:34.392409    7734 out.go:176]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I1019 23:17:34.392454    7734 out.go:176]   - Using image quay.io/operator-framework/olm
	I1019 23:17:34.393872    7734 out.go:176]   - Using image quay.io/operatorhubio/catalog:latest
	I1019 23:17:34.392538    7734 addons.go:337] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 23:17:34.394003    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I1019 23:17:34.396275    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I1019 23:17:34.394068    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.401067    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I1019 23:17:34.402440    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I1019 23:17:34.401296    7734 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 23:17:34.407034    7734 out.go:176]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.1
	I1019 23:17:34.407124    7734 addons.go:337] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 23:17:34.407134    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1019 23:17:34.407189    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.404290    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I1019 23:17:34.404402    7734 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 23:17:34.407398    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 23:17:34.408757    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I1019 23:17:34.407444    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.410076    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I1019 23:17:34.411322    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I1019 23:17:34.412726    7734 out.go:176]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I1019 23:17:34.412802    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 23:17:34.412812    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 23:17:34.412868    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.456626    7734 addons.go:337] installing /etc/kubernetes/addons/crds.yaml
	I1019 23:17:34.462595    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/crds.yaml (636901 bytes)
	I1019 23:17:34.462726    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.473289    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.475930    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.475972    7734 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 23:17:34.476000    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 23:17:34.476053    7734 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211019231643-6808
	I1019 23:17:34.496527    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.496518    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.499175    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.500978    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.504445    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.513681    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.538489    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.557738    7734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/addons-20211019231643-6808/id_rsa Username:docker}
	I1019 23:17:34.640617    7734 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 23:17:34.640643    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I1019 23:17:34.734146    7734 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1019 23:17:34.734219    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1019 23:17:34.734621    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 23:17:34.737250    7734 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 23:17:34.737269    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1019 23:17:34.737484    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 23:17:34.831609    7734 addons.go:337] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1019 23:17:34.831634    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1019 23:17:34.832588    7734 addons.go:337] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 23:17:34.832609    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 23:17:34.913632    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 23:17:34.913781    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I1019 23:17:34.913911    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I1019 23:17:34.916792    7734 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 23:17:34.916817    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1019 23:17:34.925064    7734 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 23:17:34.925087    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 23:17:34.931443    7734 addons.go:337] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 23:17:34.931465    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I1019 23:17:34.933523    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 23:17:34.934742    7734 start.go:729] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I1019 23:17:35.016425    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 23:17:35.019384    7734 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 23:17:35.019407    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 23:17:35.029152    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1019 23:17:35.032138    7734 addons.go:337] installing /etc/kubernetes/addons/olm.yaml
	I1019 23:17:35.032159    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/olm.yaml (9929 bytes)
	I1019 23:17:35.033310    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 23:17:35.033328    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I1019 23:17:35.123929    7734 addons.go:337] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 23:17:35.123953    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I1019 23:17:35.129549    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 23:17:35.134238    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I1019 23:17:35.231392    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 23:17:35.231428    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I1019 23:17:35.326408    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 23:17:35.326435    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I1019 23:17:35.333291    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 23:17:35.333318    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I1019 23:17:35.528942    7734 addons.go:337] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 23:17:35.528973    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I1019 23:17:35.631731    7734 addons.go:337] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 23:17:35.631812    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I1019 23:17:35.719044    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 23:17:35.719075    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I1019 23:17:35.918321    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 23:17:36.019051    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 23:17:36.019118    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I1019 23:17:36.218901    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 23:17:36.218931    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I1019 23:17:36.328425    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I1019 23:17:36.328454    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I1019 23:17:36.423573    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:36.428341    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 23:17:36.428362    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I1019 23:17:36.530495    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I1019 23:17:36.530523    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I1019 23:17:36.532410    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.797762439s)
	I1019 23:17:36.715442    7734 addons.go:337] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 23:17:36.715473    7734 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 23:17:36.917188    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 23:17:37.513584    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.599755056s)
	I1019 23:17:37.513630    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.776128557s)
	I1019 23:17:37.513642    7734 addons.go:375] Verifying addon ingress=true in "addons-20211019231643-6808"
	I1019 23:17:37.518478    7734 out.go:176] * Verifying ingress addon...
	I1019 23:17:37.520710    7734 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 23:17:37.513870    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.580287718s)
	I1019 23:17:37.513944    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.497435496s)
	I1019 23:17:37.520944    7734 addons.go:375] Verifying addon metrics-server=true in "addons-20211019231643-6808"
	I1019 23:17:37.514048    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (2.484865527s)
	I1019 23:17:37.514108    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.384528094s)
	I1019 23:17:37.520981    7734 addons.go:375] Verifying addon registry=true in "addons-20211019231643-6808"
	I1019 23:17:37.523941    7734 out.go:176] * Verifying registry addon...
	I1019 23:17:37.526021    7734 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 23:17:37.537363    7734 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 23:17:37.537383    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:37.538060    7734 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 23:17:37.538079    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:38.126195    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:38.126564    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:38.624392    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:38.624765    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:38.918390    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:39.125110    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:39.125370    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:39.318927    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.18464637s)
	W1019 23:17:39.318972    7734 addons.go:358] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I1019 23:17:39.318994    7734 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I1019 23:17:39.319057    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.400647232s)
	W1019 23:17:39.319101    7734 addons.go:358] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I1019 23:17:39.319113    7734 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I1019 23:17:39.595283    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I1019 23:17:39.614157    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:39.634384    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:39.679670    7734 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 23:17:40.114274    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:40.115845    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:40.117628    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.200376633s)
	I1019 23:17:40.117658    7734 addons.go:375] Verifying addon csi-hostpath-driver=true in "addons-20211019231643-6808"
	I1019 23:17:40.119788    7734 out.go:176] * Verifying csi-hostpath-driver addon...
	I1019 23:17:40.121969    7734 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 23:17:40.126076    7734 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 23:17:40.126105    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:40.542465    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:40.545261    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:40.630806    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:41.115643    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:41.116166    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:41.130177    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:41.368306    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:41.457472    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.862144016s)
	I1019 23:17:41.457557    7734 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.777843424s)
	I1019 23:17:41.541664    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:41.541800    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:41.640404    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:42.041281    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:42.041663    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:42.130673    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:42.541187    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:42.541755    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:42.630302    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:43.115266    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:43.117021    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:43.130400    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:43.368601    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:43.541062    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:43.541549    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:43.630232    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:44.041521    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:44.041819    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:44.131085    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:44.541016    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:44.541142    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:44.631034    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:45.041107    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:45.041214    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:45.131559    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:45.541481    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:45.541570    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:45.630453    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:45.868385    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:46.041482    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:46.041573    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:46.130334    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:46.541361    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:46.541466    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:46.630848    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:47.041284    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:47.041598    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:47.130044    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:47.540987    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:47.541383    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:47.630664    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:47.868656    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:48.040766    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:48.040987    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:48.130690    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:48.540758    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:48.541196    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:48.631001    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:49.041456    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:49.041605    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:49.130405    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:49.541388    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:49.541572    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:49.630675    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:50.040928    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:50.041244    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:50.130802    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:50.367589    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:50.540846    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:50.541063    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:50.630934    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:51.041154    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:51.041327    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:51.131040    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:51.541279    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:51.541305    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:51.630066    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:52.041341    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:52.041446    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:52.130220    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:52.368193    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:52.541379    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:52.541442    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:52.630349    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:53.041451    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:53.041466    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:53.130107    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:53.541553    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:53.541660    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:53.630439    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:54.041292    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:54.041503    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:54.130110    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:54.368225    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:54.541357    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:54.541651    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:54.630560    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:55.041219    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:55.041385    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:55.131161    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:55.541282    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:55.541342    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:55.630364    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:56.041510    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:56.041519    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:56.131073    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:56.540864    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:56.540908    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:56.630797    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:56.867566    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:57.040888    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:57.041176    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:57.130632    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:57.541044    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:57.541428    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:57.629915    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:58.041456    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:58.041522    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:58.132534    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:58.541043    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:58.541250    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:58.630899    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:58.868206    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:17:59.041367    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:59.041370    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:59.130297    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:17:59.540990    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:17:59.541308    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:17:59.630970    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:00.041610    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:00.041675    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:00.130916    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:00.541085    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:00.541435    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:00.630036    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:01.041373    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:01.041454    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:01.130338    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:01.368420    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:18:01.541603    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:01.541826    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:01.630947    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:02.041370    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:02.041642    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:02.130574    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:02.540768    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:02.540896    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:02.630427    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:03.041882    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:03.041926    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:03.130407    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:03.368462    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:18:03.540699    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:03.541051    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:03.631996    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:04.040869    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:04.041169    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:04.130940    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:04.540956    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:04.540970    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:04.630912    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:05.041161    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:05.041450    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:05.132132    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:05.541052    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:05.541212    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:05.630993    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:05.868045    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:18:06.041119    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:06.041227    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:06.130981    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:06.541061    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:06.541099    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:06.631263    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:07.041441    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:07.041587    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:07.130411    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:07.541311    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:07.541390    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:07.630460    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:07.868232    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:18:08.041570    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:08.041675    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:08.130561    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:08.540905    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:08.541150    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:08.630773    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:09.041474    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:09.041513    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:09.132446    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:09.541228    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:09.541349    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:09.631078    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:10.041272    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:10.041349    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:10.130149    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:10.368018    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:18:10.541245    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:10.541368    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:10.630025    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:11.040688    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:11.040959    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:11.130564    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:11.540884    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:11.541019    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:11.630517    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:12.041588    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:12.041596    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:12.130317    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:12.368080    7734 node_ready.go:58] node "addons-20211019231643-6808" has status "Ready":"False"
	I1019 23:18:12.541249    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:12.541359    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:12.632021    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:12.867642    7734 node_ready.go:49] node "addons-20211019231643-6808" has status "Ready":"True"
	I1019 23:18:12.867667    7734 node_ready.go:38] duration metric: took 38.507613207s waiting for node "addons-20211019231643-6808" to be "Ready" ...
	I1019 23:18:12.867675    7734 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1019 23:18:12.873662    7734 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:13.041221    7734 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 23:18:13.041242    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:13.041443    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:13.130096    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:13.541388    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:13.542003    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:13.631779    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:14.041488    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:14.041860    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:14.131450    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:14.541514    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:14.542259    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:14.634329    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:14.916840    7734 pod_ready.go:102] pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace has status "Ready":"False"
	I1019 23:18:15.041357    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:15.042197    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:15.131250    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:15.541345    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:15.542060    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:15.631120    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:16.042094    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:16.042215    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:16.132475    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:16.542123    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:16.542513    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:16.631723    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:17.041237    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:17.041822    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:17.132191    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:17.386953    7734 pod_ready.go:102] pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace has status "Ready":"False"
	I1019 23:18:17.541637    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:17.541731    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:17.630956    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:18.046987    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:18.047084    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:18.131496    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:18.541586    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:18.541726    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:18.635637    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:19.041825    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:19.042406    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:19.130779    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:19.541710    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:19.542297    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:19.631571    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:19.886911    7734 pod_ready.go:102] pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace has status "Ready":"False"
	I1019 23:18:20.041939    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:20.042361    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:20.133185    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:20.541822    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:20.541910    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:20.632269    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:21.041488    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:21.041570    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:21.131222    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:21.541777    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:21.541828    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:21.630570    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:21.915029    7734 pod_ready.go:102] pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace has status "Ready":"False"
	I1019 23:18:22.041499    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:22.041824    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:22.131948    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:22.541290    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:22.541352    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:22.630887    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:23.041520    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:23.041706    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:23.131136    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:23.541206    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:23.541468    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:23.631640    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:24.041146    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:24.041467    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:24.130795    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:24.387008    7734 pod_ready.go:92] pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:24.387030    7734 pod_ready.go:81] duration metric: took 11.513346539s waiting for pod "coredns-78fcd69978-rjl2m" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.387050    7734 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.390852    7734 pod_ready.go:92] pod "etcd-addons-20211019231643-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:24.390872    7734 pod_ready.go:81] duration metric: took 3.815644ms waiting for pod "etcd-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.390887    7734 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.394717    7734 pod_ready.go:92] pod "kube-apiserver-addons-20211019231643-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:24.394732    7734 pod_ready.go:81] duration metric: took 3.833413ms waiting for pod "kube-apiserver-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.394740    7734 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.398251    7734 pod_ready.go:92] pod "kube-controller-manager-addons-20211019231643-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:24.398264    7734 pod_ready.go:81] duration metric: took 3.519112ms waiting for pod "kube-controller-manager-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.398283    7734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5cg9" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.401752    7734 pod_ready.go:92] pod "kube-proxy-b5cg9" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:24.401770    7734 pod_ready.go:81] duration metric: took 3.476834ms waiting for pod "kube-proxy-b5cg9" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.401777    7734 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.541858    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:24.541886    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:24.631583    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:24.784909    7734 pod_ready.go:92] pod "kube-scheduler-addons-20211019231643-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:24.784926    7734 pod_ready.go:81] duration metric: took 383.144219ms waiting for pod "kube-scheduler-addons-20211019231643-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:24.784936    7734 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-m99jq" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:25.041669    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:25.041975    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:25.130846    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:25.185441    7734 pod_ready.go:92] pod "metrics-server-77c99ccb96-m99jq" in "kube-system" namespace has status "Ready":"True"
	I1019 23:18:25.185461    7734 pod_ready.go:81] duration metric: took 400.519396ms waiting for pod "metrics-server-77c99ccb96-m99jq" in "kube-system" namespace to be "Ready" ...
	I1019 23:18:25.185479    7734 pod_ready.go:38] duration metric: took 12.317791241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1019 23:18:25.185492    7734 api_server.go:51] waiting for apiserver process to appear ...
	I1019 23:18:25.185529    7734 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:18:25.220028    7734 api_server.go:71] duration metric: took 50.997719614s to wait for apiserver process to appear ...
	I1019 23:18:25.220055    7734 api_server.go:87] waiting for apiserver healthz status ...
	I1019 23:18:25.220066    7734 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 23:18:25.230743    7734 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 23:18:25.231700    7734 api_server.go:140] control plane version: v1.22.2
	I1019 23:18:25.231718    7734 api_server.go:130] duration metric: took 11.658165ms to wait for apiserver health ...
	I1019 23:18:25.231726    7734 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 23:18:25.393358    7734 system_pods.go:59] 20 kube-system pods found
	I1019 23:18:25.393397    7734 system_pods.go:61] "coredns-78fcd69978-rjl2m" [3c25a7ca-66e7-46ef-b81b-c6c6b7687049] Running
	I1019 23:18:25.393409    7734 system_pods.go:61] "csi-hostpath-attacher-0" [a874b4a1-a5c7-4d7f-8053-2e836a4c8c08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 23:18:25.393418    7734 system_pods.go:61] "csi-hostpath-provisioner-0" [f31cc511-a338-4553-a05f-3bdb39d76979] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I1019 23:18:25.393428    7734 system_pods.go:61] "csi-hostpath-resizer-0" [f6082318-8b9b-4f91-8bd9-35a5925e914d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 23:18:25.393463    7734 system_pods.go:61] "csi-hostpath-snapshotter-0" [932d6672-dc82-414a-bf99-3eba9d8acae4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I1019 23:18:25.393474    7734 system_pods.go:61] "csi-hostpathplugin-0" [297373ed-17b6-4ad8-9d71-9fbadcb20966] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I1019 23:18:25.393486    7734 system_pods.go:61] "etcd-addons-20211019231643-6808" [92d62341-002b-4f1e-81d3-3a350f4a300d] Running
	I1019 23:18:25.393495    7734 system_pods.go:61] "kindnet-sx4n5" [6fa9d1ba-32eb-45af-87d4-c384493fc083] Running
	I1019 23:18:25.393505    7734 system_pods.go:61] "kube-apiserver-addons-20211019231643-6808" [7ca3cd7e-e6fa-449c-b409-a2a063fa254a] Running
	I1019 23:18:25.393514    7734 system_pods.go:61] "kube-controller-manager-addons-20211019231643-6808" [7743a272-cd12-45ab-a4d4-2da03d0d098e] Running
	I1019 23:18:25.393527    7734 system_pods.go:61] "kube-ingress-dns-minikube" [0fb38f91-ce47-4d8b-8688-d829ec076cc0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 23:18:25.393536    7734 system_pods.go:61] "kube-proxy-b5cg9" [9b074606-1864-4f0c-8898-d3d19fee2693] Running
	I1019 23:18:25.393546    7734 system_pods.go:61] "kube-scheduler-addons-20211019231643-6808" [ddf91e50-c3b5-45b3-a8ba-b68b50e0fdb0] Running
	I1019 23:18:25.393557    7734 system_pods.go:61] "metrics-server-77c99ccb96-m99jq" [be096bdf-c8da-4b3d-81ea-648956a0dfa9] Running
	I1019 23:18:25.393568    7734 system_pods.go:61] "registry-bsxpn" [a1f96410-ffd9-4aa0-b501-2936a051c7d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 23:18:25.393578    7734 system_pods.go:61] "registry-proxy-9ddbd" [e75fbf66-a879-4a83-b118-2479b2b42449] Running
	I1019 23:18:25.393591    7734 system_pods.go:61] "snapshot-controller-989f9ddc8-djdlb" [4494f652-bf37-4b2a-a2d2-815bc1934fc3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 23:18:25.393602    7734 system_pods.go:61] "snapshot-controller-989f9ddc8-rlx5q" [65398c9c-002b-42f9-be8f-cf048324ea4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 23:18:25.393615    7734 system_pods.go:61] "storage-provisioner" [0f2b551a-6315-41ab-b35b-45f6ec28031d] Running
	I1019 23:18:25.393624    7734 system_pods.go:61] "tiller-deploy-64b546c44b-qgh9p" [19d16b94-c86b-44ed-9b95-f9ecf3979406] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1019 23:18:25.393634    7734 system_pods.go:74] duration metric: took 161.903203ms to wait for pod list to return data ...
	I1019 23:18:25.393646    7734 default_sa.go:34] waiting for default service account to be created ...
	I1019 23:18:25.543090    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:25.543195    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:25.585614    7734 default_sa.go:45] found service account: "default"
	I1019 23:18:25.585643    7734 default_sa.go:55] duration metric: took 191.988309ms for default service account to be created ...
	I1019 23:18:25.585653    7734 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 23:18:25.630888    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:25.792951    7734 system_pods.go:86] 20 kube-system pods found
	I1019 23:18:25.792977    7734 system_pods.go:89] "coredns-78fcd69978-rjl2m" [3c25a7ca-66e7-46ef-b81b-c6c6b7687049] Running
	I1019 23:18:25.792988    7734 system_pods.go:89] "csi-hostpath-attacher-0" [a874b4a1-a5c7-4d7f-8053-2e836a4c8c08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 23:18:25.792995    7734 system_pods.go:89] "csi-hostpath-provisioner-0" [f31cc511-a338-4553-a05f-3bdb39d76979] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I1019 23:18:25.793000    7734 system_pods.go:89] "csi-hostpath-resizer-0" [f6082318-8b9b-4f91-8bd9-35a5925e914d] Running
	I1019 23:18:25.793006    7734 system_pods.go:89] "csi-hostpath-snapshotter-0" [932d6672-dc82-414a-bf99-3eba9d8acae4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I1019 23:18:25.793012    7734 system_pods.go:89] "csi-hostpathplugin-0" [297373ed-17b6-4ad8-9d71-9fbadcb20966] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I1019 23:18:25.793017    7734 system_pods.go:89] "etcd-addons-20211019231643-6808" [92d62341-002b-4f1e-81d3-3a350f4a300d] Running
	I1019 23:18:25.793021    7734 system_pods.go:89] "kindnet-sx4n5" [6fa9d1ba-32eb-45af-87d4-c384493fc083] Running
	I1019 23:18:25.793026    7734 system_pods.go:89] "kube-apiserver-addons-20211019231643-6808" [7ca3cd7e-e6fa-449c-b409-a2a063fa254a] Running
	I1019 23:18:25.793031    7734 system_pods.go:89] "kube-controller-manager-addons-20211019231643-6808" [7743a272-cd12-45ab-a4d4-2da03d0d098e] Running
	I1019 23:18:25.793038    7734 system_pods.go:89] "kube-ingress-dns-minikube" [0fb38f91-ce47-4d8b-8688-d829ec076cc0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 23:18:25.793042    7734 system_pods.go:89] "kube-proxy-b5cg9" [9b074606-1864-4f0c-8898-d3d19fee2693] Running
	I1019 23:18:25.793046    7734 system_pods.go:89] "kube-scheduler-addons-20211019231643-6808" [ddf91e50-c3b5-45b3-a8ba-b68b50e0fdb0] Running
	I1019 23:18:25.793050    7734 system_pods.go:89] "metrics-server-77c99ccb96-m99jq" [be096bdf-c8da-4b3d-81ea-648956a0dfa9] Running
	I1019 23:18:25.793056    7734 system_pods.go:89] "registry-bsxpn" [a1f96410-ffd9-4aa0-b501-2936a051c7d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 23:18:25.793063    7734 system_pods.go:89] "registry-proxy-9ddbd" [e75fbf66-a879-4a83-b118-2479b2b42449] Running
	I1019 23:18:25.793069    7734 system_pods.go:89] "snapshot-controller-989f9ddc8-djdlb" [4494f652-bf37-4b2a-a2d2-815bc1934fc3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 23:18:25.793080    7734 system_pods.go:89] "snapshot-controller-989f9ddc8-rlx5q" [65398c9c-002b-42f9-be8f-cf048324ea4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 23:18:25.793087    7734 system_pods.go:89] "storage-provisioner" [0f2b551a-6315-41ab-b35b-45f6ec28031d] Running
	I1019 23:18:25.793092    7734 system_pods.go:89] "tiller-deploy-64b546c44b-qgh9p" [19d16b94-c86b-44ed-9b95-f9ecf3979406] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1019 23:18:25.793100    7734 system_pods.go:126] duration metric: took 207.443201ms to wait for k8s-apps to be running ...
	I1019 23:18:25.793106    7734 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 23:18:25.793142    7734 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:18:25.808882    7734 system_svc.go:56] duration metric: took 15.764866ms WaitForService to wait for kubelet.
	I1019 23:18:25.808926    7734 kubeadm.go:547] duration metric: took 51.586620251s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1019 23:18:25.808969    7734 node_conditions.go:102] verifying NodePressure condition ...
	I1019 23:18:25.986598    7734 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1019 23:18:25.986626    7734 node_conditions.go:123] node cpu capacity is 8
	I1019 23:18:25.986638    7734 node_conditions.go:105] duration metric: took 177.664527ms to run NodePressure ...
	I1019 23:18:25.986648    7734 start.go:231] waiting for startup goroutines ...
	I1019 23:18:26.041538    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:26.041701    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:26.132088    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:26.542244    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:26.542261    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 23:18:26.632054    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:27.042185    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:27.042327    7734 kapi.go:108] duration metric: took 49.516302606s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 23:18:27.132014    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:27.541965    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:27.631874    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:28.042768    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:28.141351    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:28.545117    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:28.631123    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:29.042657    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:29.132000    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:29.541294    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:29.631479    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:30.041629    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:30.130730    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:30.541659    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:30.631081    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:31.041783    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:31.131255    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:31.542539    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:31.632317    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:32.042347    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:32.132095    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:32.542194    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:32.635438    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:33.041797    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:33.130484    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:33.542106    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:33.631263    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:34.050502    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:34.136656    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:34.541325    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:34.633478    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:35.117525    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:35.133808    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:35.541691    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:35.715951    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:36.042697    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:36.130302    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:36.542097    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:36.631805    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:37.042423    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:37.131948    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:37.541531    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:37.631847    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:38.041521    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:38.132021    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:38.541086    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:38.631029    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:39.041874    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:39.131008    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:39.540969    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:39.632940    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:40.041771    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:40.132218    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:40.541471    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:40.631206    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:41.041815    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:41.131230    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:41.541361    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:41.631511    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:42.042135    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:42.130986    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:42.627570    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:42.632543    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:43.041992    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:43.130969    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:43.541465    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:43.632361    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:44.041611    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:44.133315    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:44.541404    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:44.630948    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:45.041636    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:45.130848    7734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 23:18:45.542176    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:45.634031    7734 kapi.go:108] duration metric: took 1m5.512058507s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 23:18:46.042527    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:46.541271    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:47.041684    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:47.541780    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:48.041879    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:48.541732    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:49.545916    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:50.041788    7734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 23:18:50.541830    7734 kapi.go:108] duration metric: took 1m13.021118823s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 23:18:50.543470    7734 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, ingress-dns, metrics-server, helm-tiller, olm, volumesnapshots, registry, csi-hostpath-driver, ingress
	I1019 23:18:50.543509    7734 addons.go:406] enableAddons completed in 1m16.321161577s
	I1019 23:18:50.593578    7734 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1019 23:18:50.595253    7734 out.go:176] 
	W1019 23:18:50.595427    7734 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.2.
	I1019 23:18:50.596950    7734 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1019 23:18:50.598752    7734 out.go:176] * Done! kubectl is now configured to use "addons-20211019231643-6808" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2b78c4bae6115       56cc512116c8f       8 minutes ago       Running             busybox                   0                   3e1a4de706bf3
	47d1e33100b23       57af51f0aa793       8 minutes ago       Running             registration-operator     0                   67ade9a269eb5
	b2e9ad13a2b3d       57af51f0aa793       8 minutes ago       Running             registration-operator     0                   d78822cd7badc
	ee7fd6a827b9b       57af51f0aa793       8 minutes ago       Running             registration-operator     0                   a813b6b6aceb6
	bc43eb8305c48       98b8dcda0fb41       8 minutes ago       Running             hello-world-app           0                   d0139a44b4a08
	a26d2b6bb55cf       f2dbf1cb6fb89       8 minutes ago       Exited              extract                   0                   1300a6aa04208
	89d645853ac18       d0a5ef0ccb82b       8 minutes ago       Exited              pull                      0                   1300a6aa04208
	fc53d455bd0a5       513f9a9d8748b       8 minutes ago       Running             nginx                     0                   e4707febb93bf
	9d1a788236f0e       518fd05ba6b5b       8 minutes ago       Exited              util                      0                   1300a6aa04208
	0fad6f148e65d       e8197c9bc36bb       9 minutes ago       Running             gcp-auth                  0                   ab219d9a01e2d
	05a7cd86ace2b       518fd05ba6b5b       9 minutes ago       Running             packageserver             0                   38d3813f78564
	118348e3993c1       518fd05ba6b5b       9 minutes ago       Running             packageserver             0                   fd7f3af0c8876
	b36cbc5f940e7       da9bdcdbff99a       9 minutes ago       Running             registry-server           0                   197860de014f5
	cf6c21d78e0a9       518fd05ba6b5b       9 minutes ago       Running             olm-operator              0                   4b2ae157e6832
	a750f47e3bb48       518fd05ba6b5b       9 minutes ago       Running             catalog-operator          0                   7237d6b0ebd82
	5bb61fdfaef4d       6e38f40d628db       9 minutes ago       Running             storage-provisioner       0                   66d19f8e52d62
	e23bc3447baa8       8d147537fb7d1       9 minutes ago       Running             coredns                   0                   6d07185a9804e
	52050f6348a87       873127efbc8a7       10 minutes ago      Running             kube-proxy                0                   beaf590dbf9b6
	b5f75525522e6       6de166512aa22       10 minutes ago      Running             kindnet-cni               0                   eb92e36d58f8b
	04074056b1c80       b51ddc1014b04       10 minutes ago      Running             kube-scheduler            0                   8b78fdfc6c700
	8fc82401f1226       0048118155842       10 minutes ago      Running             etcd                      0                   0cb1a10e2f8b1
	0e18aa12c3e03       5425bcbd23c54       10 minutes ago      Running             kube-controller-manager   0                   adf8bac8fafc9
	065dd3359c874       e64579b7d8862       10 minutes ago      Running             kube-apiserver            0                   f623692155d0e
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-10-19 23:16:52 UTC, end at Tue 2021-10-19 23:28:06 UTC. --
	Oct 19 23:27:38 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:38.829861005Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" returns with exit code 0"
	Oct 19 23:27:38 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:38.830390370Z" level=info msg="Finish piping \"stderr\" of container exec \"1877b52448b19136f1e6aab02eb054a8e13a41d06247fe5d055a82c840eb70d4\""
	Oct 19 23:27:38 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:38.830404126Z" level=info msg="Finish piping \"stdout\" of container exec \"1877b52448b19136f1e6aab02eb054a8e13a41d06247fe5d055a82c840eb70d4\""
	Oct 19 23:27:38 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:38.830474926Z" level=info msg="Exec process \"1877b52448b19136f1e6aab02eb054a8e13a41d06247fe5d055a82c840eb70d4\" exits with exit code 0 and error <nil>"
	Oct 19 23:27:38 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:38.831563405Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" returns with exit code 0"
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.736647374Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" with command [grpc_health_probe -addr=:50051] and timeout 5 (s)"
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.736651040Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" with command [grpc_health_probe -addr=:50051] and timeout 5 (s)"
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.802501159Z" level=info msg="Finish piping \"stderr\" of container exec \"4cc61ade48505e605f53d99762e3f84d78dd0fcf83f7926b6ffc0baa4e2cfbc8\""
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.802500242Z" level=info msg="Finish piping \"stdout\" of container exec \"4cc61ade48505e605f53d99762e3f84d78dd0fcf83f7926b6ffc0baa4e2cfbc8\""
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.802598585Z" level=info msg="Exec process \"4cc61ade48505e605f53d99762e3f84d78dd0fcf83f7926b6ffc0baa4e2cfbc8\" exits with exit code 0 and error <nil>"
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.852457252Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" returns with exit code 0"
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.854491377Z" level=info msg="Finish piping \"stderr\" of container exec \"0a59420e9aa0c53556e648b82ce74850fe9dcf20b3a1c5fc37806a9554c61ead\""
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.854562690Z" level=info msg="Finish piping \"stdout\" of container exec \"0a59420e9aa0c53556e648b82ce74850fe9dcf20b3a1c5fc37806a9554c61ead\""
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.854622113Z" level=info msg="Exec process \"0a59420e9aa0c53556e648b82ce74850fe9dcf20b3a1c5fc37806a9554c61ead\" exits with exit code 0 and error <nil>"
	Oct 19 23:27:48 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:48.855702971Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" returns with exit code 0"
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.736622145Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" with command [grpc_health_probe -addr=:50051] and timeout 5 (s)"
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.736649772Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" with command [grpc_health_probe -addr=:50051] and timeout 5 (s)"
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.790426850Z" level=info msg="Finish piping \"stderr\" of container exec \"3dc673291e28b38ed6b0450d7e8cc874fb1778304fafc6a9ce4666434da1167f\""
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.790446212Z" level=info msg="Finish piping \"stdout\" of container exec \"3dc673291e28b38ed6b0450d7e8cc874fb1778304fafc6a9ce4666434da1167f\""
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.790503773Z" level=info msg="Exec process \"3dc673291e28b38ed6b0450d7e8cc874fb1778304fafc6a9ce4666434da1167f\" exits with exit code 0 and error <nil>"
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.832373682Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" returns with exit code 0"
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.834546268Z" level=info msg="Finish piping \"stdout\" of container exec \"7efdcc0fe5a6e175cff0eb3834647a78ba07906d073a9b47a75974167ea66a6a\""
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.834547378Z" level=info msg="Finish piping \"stderr\" of container exec \"7efdcc0fe5a6e175cff0eb3834647a78ba07906d073a9b47a75974167ea66a6a\""
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.834653828Z" level=info msg="Exec process \"7efdcc0fe5a6e175cff0eb3834647a78ba07906d073a9b47a75974167ea66a6a\" exits with exit code 0 and error <nil>"
	Oct 19 23:27:58 addons-20211019231643-6808 containerd[452]: time="2021-10-19T23:27:58.835799305Z" level=info msg="ExecSync for \"b36cbc5f940e769a7cc2189cda3192a715a8508545a682bd70ed2f139847862a\" returns with exit code 0"
	
	* 
	* ==> coredns [e23bc3447baa81624b44b3943ac94302900482eda8aafebb47a156e295d5b72e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20211019231643-6808
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20211019231643-6808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6
	                    minikube.k8s.io/name=addons-20211019231643-6808
	                    minikube.k8s.io/updated_at=2021_10_19T23_17_17_0700
	                    minikube.k8s.io/version=v1.23.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20211019231643-6808
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Oct 2021 23:17:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20211019231643-6808
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Oct 2021 23:28:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Oct 2021 23:25:23 +0000   Tue, 19 Oct 2021 23:17:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Oct 2021 23:25:23 +0000   Tue, 19 Oct 2021 23:17:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Oct 2021 23:25:23 +0000   Tue, 19 Oct 2021 23:17:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Oct 2021 23:25:23 +0000   Tue, 19 Oct 2021 23:18:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20211019231643-6808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
	  System UUID:                d984d8e8-385b-4199-958b-24407da91294
	  Boot ID:                    3860adfd-184a-4be3-9030-9440e8dafeac
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.2
	  Kube-Proxy Version:         v1.22.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  default                     hello-world-app-7b9bf45d65-fl72l                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  default                     nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  default                     private-image-7ff9c8c74f-4ttlt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  gcp-auth                    gcp-auth-f6f59cc7c-gspd6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 coredns-78fcd69978-rjl2m                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-20211019231643-6808                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-sx4n5                                         100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-20211019231643-6808             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-20211019231643-6808    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b5cg9                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-20211019231643-6808             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  my-etcd                     cluster-manager-794c6cc889-csvcw                      100m (1%)     0 (0%)      128Mi (0%)       0 (0%)         8m36s
	  my-etcd                     cluster-manager-794c6cc889-gmjg2                      100m (1%)     0 (0%)      128Mi (0%)       0 (0%)         8m36s
	  my-etcd                     cluster-manager-794c6cc889-snlxr                      100m (1%)     0 (0%)      128Mi (0%)       0 (0%)         8m36s
	  olm                         catalog-operator-6d578c5764-s6lts                     10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         10m
	  olm                         olm-operator-5b58594fc8-h4v6q                         10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         10m
	  olm                         operatorhubio-catalog-25tqk                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         9m38s
	  olm                         packageserver-78f8f586db-78bbc                        10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         9m32s
	  olm                         packageserver-78f8f586db-pxvtl                        10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         9m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1200m (15%)  100m (1%)
	  memory             994Mi (3%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  10m (x4 over 10m)  kubelet  Node addons-20211019231643-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x4 over 10m)  kubelet  Node addons-20211019231643-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x4 over 10m)  kubelet  Node addons-20211019231643-6808 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet  Node addons-20211019231643-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet  Node addons-20211019231643-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet  Node addons-20211019231643-6808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m54s              kubelet  Node addons-20211019231643-6808 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 05 55 66 9c a5 08 06        ......".Uf....
	[Oct19 23:19] IPv4: martian source 10.244.0.26 from 10.244.0.26, on dev veth3ccd7c3b
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f6 7c ee 7b ce db 08 06        .......|.{....
	[  +2.197153] IPv4: martian source 10.244.0.27 from 10.244.0.27, on dev veth6c2695de
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 5e 6a 49 93 cd 1f 08 06        ......^jI.....
	[  +3.247529] IPv4: martian source 10.244.0.28 from 10.244.0.28, on dev vetha847282f
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae e6 19 a6 4f b2 08 06        ..........O...
	[  +1.455241] IPv4: martian source 10.244.0.29 from 10.244.0.29, on dev veth3f8662e4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 46 66 52 ec c1 b5 08 06        ......FfR.....
	[  +4.495940] IPv4: martian source 10.244.0.30 from 10.244.0.30, on dev vethfcb87032
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1a 35 cb 39 54 15 08 06        .......5.9T...
	[ +11.643144] IPv4: martian source 10.244.0.31 from 10.244.0.31, on dev vetha80d753a
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa b7 42 75 59 4a 08 06        ........BuYJ..
	[  +4.903466] IPv4: martian source 10.244.0.32 from 10.244.0.32, on dev vetha16152ee
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff aa e7 81 55 47 7e 08 06        .........UG~..
	[  +0.179569] IPv4: martian source 10.244.0.33 from 10.244.0.33, on dev vetha7e77264
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff e6 c4 86 62 3b f0 08 06        .........b;...
	[  +0.000549] IPv4: martian source 10.244.0.34 from 10.244.0.34, on dev vethee731ba8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e f3 f2 37 d4 25 08 06        .........7.%..
	[  +8.731224] IPv4: martian source 10.244.0.35 from 10.244.0.35, on dev veth44d596ae
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da dd f9 4e 69 d9 08 06        .........Ni...
	[ +17.442401] IPv4: martian source 10.244.0.36 from 10.244.0.36, on dev veth07238d0d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 72 26 1f 5b 37 20 08 06        ......r&.[7 ..
	[Oct19 23:20] IPv4: martian source 10.244.0.37 from 10.244.0.37, on dev veth95e2ecf9
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 62 cd db d6 51 08 06        .......b...Q..
	
	* 
	* ==> etcd [8fc82401f1226b1ff162784ac56da5ab399902ab07700489e5f8374920469fdb] <==
	* {"level":"info","ts":"2021-10-19T23:19:19.437Z","caller":"traceutil/trace.go:171","msg":"trace[369842793] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1596; }","duration":"898.352547ms","start":"2021-10-19T23:19:18.538Z","end":"2021-10-19T23:19:19.437Z","steps":["trace[369842793] 'range keys from in-memory index tree'  (duration: 897.988649ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:19.437Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:19:18.538Z","time spent":"898.385782ms","remote":"127.0.0.1:33742","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5744,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2021-10-19T23:19:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[805638954] linearizableReadLoop","detail":"{readStateIndex:1679; appliedIndex:1678; }","duration":"206.450732ms","start":"2021-10-19T23:19:19.447Z","end":"2021-10-19T23:19:19.653Z","steps":["trace[805638954] 'read index received'  (duration: 206.264281ms)","trace[805638954] 'applied index is now lower than readState.Index'  (duration: 185.789µs)"],"step_count":2}
	{"level":"info","ts":"2021-10-19T23:19:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[1359860439] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"209.674828ms","start":"2021-10-19T23:19:19.444Z","end":"2021-10-19T23:19:19.653Z","steps":["trace[1359860439] 'process raft request'  (duration: 209.469734ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:19.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"206.613581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/subscriptions/my-etcd/\" range_end:\"/registry/operators.coreos.com/subscriptions/my-etcd0\" ","response":"range_response_count:1 size:2651"}
	{"level":"info","ts":"2021-10-19T23:19:19.653Z","caller":"traceutil/trace.go:171","msg":"trace[465058001] range","detail":"{range_begin:/registry/operators.coreos.com/subscriptions/my-etcd/; range_end:/registry/operators.coreos.com/subscriptions/my-etcd0; response_count:1; response_revision:1598; }","duration":"206.667574ms","start":"2021-10-19T23:19:19.447Z","end":"2021-10-19T23:19:19.653Z","steps":["trace[465058001] 'agreement among raft nodes before linearized reading'  (duration: 206.545109ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:19.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"206.637098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/operatorgroups/my-etcd/\" range_end:\"/registry/operators.coreos.com/operatorgroups/my-etcd0\" ","response":"range_response_count:1 size:919"}
	{"level":"info","ts":"2021-10-19T23:19:19.654Z","caller":"traceutil/trace.go:171","msg":"trace[1309035744] range","detail":"{range_begin:/registry/operators.coreos.com/operatorgroups/my-etcd/; range_end:/registry/operators.coreos.com/operatorgroups/my-etcd0; response_count:1; response_revision:1598; }","duration":"206.776478ms","start":"2021-10-19T23:19:19.447Z","end":"2021-10-19T23:19:19.654Z","steps":["trace[1309035744] 'agreement among raft nodes before linearized reading'  (duration: 206.578987ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:19.655Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"206.644451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/operators.coreos.com/operators/cluster-manager.my-etcd\" ","response":"range_response_count:1 size:1445"}
	{"level":"info","ts":"2021-10-19T23:19:19.655Z","caller":"traceutil/trace.go:171","msg":"trace[1306668901] range","detail":"{range_begin:/registry/operators.coreos.com/operators/cluster-manager.my-etcd; range_end:; response_count:1; response_revision:1598; }","duration":"206.690469ms","start":"2021-10-19T23:19:19.448Z","end":"2021-10-19T23:19:19.655Z","steps":["trace[1306668901] 'agreement among raft nodes before linearized reading'  (duration: 206.613803ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:19.655Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.920791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5720"}
	{"level":"info","ts":"2021-10-19T23:19:19.655Z","caller":"traceutil/trace.go:171","msg":"trace[289137602] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1598; }","duration":"115.952345ms","start":"2021-10-19T23:19:19.539Z","end":"2021-10-19T23:19:19.655Z","steps":["trace[289137602] 'agreement among raft nodes before linearized reading'  (duration: 115.883626ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:55.998Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"284.632058ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128008431990289034 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\" mod_revision:2088 > success:<request_delete_range:<key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\" > > failure:<request_range:<key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\" > >>","response":"size:18"}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[112099378] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2090; }","duration":"285.016262ms","start":"2021-10-19T23:19:55.713Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[112099378] 'compare'  (duration: 284.546157ms)"],"step_count":1}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[1483209631] linearizableReadLoop","detail":"{readStateIndex:2195; appliedIndex:2194; }","duration":"281.390051ms","start":"2021-10-19T23:19:55.716Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[1483209631] 'read index received'  (duration: 279.078466ms)","trace[1483209631] 'applied index is now lower than readState.Index'  (duration: 2.308597ms)"],"step_count":2}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[1958566628] transaction","detail":"{read_only:false; response_revision:2091; number_of_response:1; }","duration":"278.661786ms","start":"2021-10-19T23:19:55.719Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[1958566628] 'process raft request'  (duration: 278.477338ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:55.998Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"257.103932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[1193979766] range","detail":"{range_begin:/registry/jobs/ingress-nginx/ingress-nginx-admission-create; range_end:; response_count:0; response_revision:2092; }","duration":"257.161855ms","start":"2021-10-19T23:19:55.741Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[1193979766] 'agreement among raft nodes before linearized reading'  (duration: 257.099874ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:55.998Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"281.672728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\" ","response":"range_response_count:1 size:36208"}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[2037152323] transaction","detail":"{read_only:false; response_revision:2092; number_of_response:1; }","duration":"123.412469ms","start":"2021-10-19T23:19:55.875Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[2037152323] 'process raft request'  (duration: 123.12993ms)"],"step_count":1}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[1571883001] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io; range_end:; response_count:1; response_revision:2092; }","duration":"281.723879ms","start":"2021-10-19T23:19:55.716Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[1571883001] 'agreement among raft nodes before linearized reading'  (duration: 281.492821ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:19:55.998Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"158.960306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:8708"}
	{"level":"info","ts":"2021-10-19T23:19:55.998Z","caller":"traceutil/trace.go:171","msg":"trace[978429933] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:2092; }","duration":"159.00445ms","start":"2021-10-19T23:19:55.839Z","end":"2021-10-19T23:19:55.998Z","steps":["trace[978429933] 'agreement among raft nodes before linearized reading'  (duration: 158.943069ms)"],"step_count":1}
	{"level":"info","ts":"2021-10-19T23:27:10.771Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2284}
	{"level":"info","ts":"2021-10-19T23:27:10.810Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2284,"took":"37.518791ms"}
	
	* 
	* ==> kernel <==
	*  23:28:07 up  1:10,  0 users,  load average: 0.26, 0.27, 0.24
	Linux addons-20211019231643-6808 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [065dd3359c8746fe98c320a9505385cb18296062bcc115c4087f90278601cc5c] <==
	* Trace[1075524622]: ---"About to write a response" 584ms (23:19:19.437)
	Trace[1075524622]: [584.660421ms] [584.660421ms] END
	I1019 23:19:19.438352       1 trace.go:205] Trace[1584905418]: "List etcd3" key:/operators.coreos.com/operatorgroups/my-etcd,resourceVersion:,resourceVersionMatch:,limit:0,continue: (19-Oct-2021 23:19:18.522) (total time: 915ms):
	Trace[1584905418]: [915.90865ms] [915.90865ms] END
	I1019 23:19:19.438474       1 trace.go:205] Trace[98557183]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (19-Oct-2021 23:19:18.538) (total time: 899ms):
	Trace[98557183]: [899.859248ms] [899.859248ms] END
	I1019 23:19:19.438509       1 trace.go:205] Trace[2113583469]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (19-Oct-2021 23:19:18.818) (total time: 619ms):
	Trace[2113583469]: [619.712289ms] [619.712289ms] END
	I1019 23:19:19.438544       1 trace.go:205] Trace[1077687729]: "List" url:/apis/operators.coreos.com/v1/namespaces/my-etcd/operatorgroups,user-agent:catalog/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:76de57ca-c2a6-4931-8123-718be625ab57,client:10.244.0.10,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:19:18.522) (total time: 916ms):
	Trace[1077687729]: ---"Listing from storage done" 915ms (23:19:19.438)
	Trace[1077687729]: [916.140938ms] [916.140938ms] END
	I1019 23:19:19.438910       1 trace.go:205] Trace[1991526596]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:aa6e2b99-6e60-4266-89ca-ea8e856dc879,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:19:18.538) (total time: 900ms):
	Trace[1991526596]: ---"Listing from storage done" 899ms (23:19:19.438)
	Trace[1991526596]: [900.310418ms] [900.310418ms] END
	I1019 23:19:19.438990       1 trace.go:205] Trace[248988213]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:3e09dcec-e47f-4feb-8837-540f842b634c,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:19:18.818) (total time: 620ms):
	Trace[248988213]: ---"Listing from storage done" 619ms (23:19:19.438)
	Trace[248988213]: [620.205192ms] [620.205192ms] END
	I1019 23:19:20.687327       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1019 23:19:25.411966       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1019 23:19:26.912958       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc012e90e70), encoder:(*versioning.codec)(0xc017160e60), buf:(*bytes.Buffer)(0xc0172a92c0)})
	E1019 23:19:36.745051       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1019 23:19:49.313714       1 watch.go:251] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0xc0120b9238), encoder:(*versioning.codec)(0xc00f192000), buf:(*bytes.Buffer)(0xc00ce13d40)})
	W1019 23:19:57.000419       1 cacher.go:149] Terminating all watchers from cacher *unstructured.Unstructured
	W1019 23:19:57.018471       1 cacher.go:149] Terminating all watchers from cacher *unstructured.Unstructured
	W1019 23:19:57.026639       1 cacher.go:149] Terminating all watchers from cacher *unstructured.Unstructured
	
	* 
	* ==> kube-controller-manager [0e18aa12c3e03fbb375896ea6f652252aca31ee2d4f9bb0c38f1f85be89ea757] <==
	* E1019 23:22:02.406403       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:22:34.289536       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:22:54.540078       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:22:58.484531       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:23:15.176928       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:23:30.278181       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:23:50.699269       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:23:51.279010       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:24:06.438990       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:24:23.723577       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:24:44.695163       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:24:55.242547       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:25:06.192337       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:25:30.510423       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:25:33.836736       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:25:47.138654       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:26:07.162857       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:26:20.038355       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:26:45.075050       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:26:45.634402       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:27:09.048853       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:27:18.650531       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:27:21.798127       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:27:52.559506       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1019 23:27:56.359205       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [52050f6348a87738cc5abbe1014a52adabc114a6cb7aaf559102f34fca2589ad] <==
	* I1019 23:17:35.713728       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1019 23:17:35.713827       1 server_others.go:140] Detected node IP 192.168.49.2
	W1019 23:17:35.713854       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1019 23:17:36.217926       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1019 23:17:36.217968       1 server_others.go:212] Using iptables Proxier.
	I1019 23:17:36.217981       1 server_others.go:219] creating dualStackProxier for iptables.
	W1019 23:17:36.217995       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1019 23:17:36.218259       1 server.go:649] Version: v1.22.2
	I1019 23:17:36.219879       1 config.go:224] Starting endpoint slice config controller
	I1019 23:17:36.219901       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1019 23:17:36.220108       1 config.go:315] Starting service config controller
	I1019 23:17:36.220118       1 shared_informer.go:240] Waiting for caches to sync for service config
	E1019 23:17:36.229518       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"addons-20211019231643-6808.16af91c2f169cd4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc053f2640d09273b, ext:997531382, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-addons-20211019231643-6808", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"addons-
20211019231643-6808", UID:"addons-20211019231643-6808", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "addons-20211019231643-6808.16af91c2f169cd4b" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I1019 23:17:36.321084       1 shared_informer.go:247] Caches are synced for service config 
	I1019 23:17:36.321104       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [04074056b1c8096e233450831f7a72ab5486c286032659707280b6addb39f33f] <==
	* I1019 23:17:13.717080       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 23:17:13.717118       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 23:17:13.717150       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1019 23:17:13.719480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 23:17:13.719587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:13.719879       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 23:17:13.719892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1019 23:17:13.720001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1019 23:17:13.720009       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 23:17:13.720093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 23:17:13.720112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:13.720167       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 23:17:13.720208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:13.720220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:13.720298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 23:17:13.720310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 23:17:13.720382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1019 23:17:13.720512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 23:17:14.566802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:14.600945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:14.715185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 23:17:14.780847       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1019 23:17:14.812677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 23:17:14.830919       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1019 23:17:15.317933       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-10-19 23:16:52 UTC, end at Tue 2021-10-19 23:28:07 UTC. --
	Oct 19 23:25:05 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:05.621678    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:25:20 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:20.625289    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:25:32 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:32.622013    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:25:39 addons-20211019231643-6808 kubelet[1214]: I1019 23:25:39.621445    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-csvcw" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:25:43 addons-20211019231643-6808 kubelet[1214]: I1019 23:25:43.621672    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-snlxr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:25:45 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:45.814305    1214 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\": failed to resolve reference \"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4"
	Oct 19 23:25:45 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:45.814347    1214 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\": failed to resolve reference \"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4"
	Oct 19 23:25:45 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:45.814449    1214 kuberuntime_manager.go:895] container &Container{Name:private-image,Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:k8s-minikube,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:k8s-minikube,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:k8s-minikube,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:k8s-minikube,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:k8s-minikube,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wj62d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google
-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod private-image-7ff9c8c74f-4ttlt_default(3b1faae0-8021-458a-80e6-050c71b305b7): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": failed to resolve reference "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
	Oct 19 23:25:45 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:45.814497    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\": failed to resolve reference \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:25:56 addons-20211019231643-6808 kubelet[1214]: I1019 23:25:56.620821    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-gmjg2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:25:59 addons-20211019231643-6808 kubelet[1214]: E1019 23:25:59.621889    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:26:13 addons-20211019231643-6808 kubelet[1214]: E1019 23:26:13.622245    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:26:28 addons-20211019231643-6808 kubelet[1214]: E1019 23:26:28.621934    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:26:41 addons-20211019231643-6808 kubelet[1214]: E1019 23:26:41.622059    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:26:43 addons-20211019231643-6808 kubelet[1214]: I1019 23:26:43.621279    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-csvcw" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:26:49 addons-20211019231643-6808 kubelet[1214]: I1019 23:26:49.621613    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-snlxr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:26:54 addons-20211019231643-6808 kubelet[1214]: E1019 23:26:54.621993    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:27:08 addons-20211019231643-6808 kubelet[1214]: I1019 23:27:08.621184    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-gmjg2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:27:09 addons-20211019231643-6808 kubelet[1214]: E1019 23:27:09.622186    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:27:22 addons-20211019231643-6808 kubelet[1214]: E1019 23:27:22.621934    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:27:36 addons-20211019231643-6808 kubelet[1214]: E1019 23:27:36.621362    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:27:51 addons-20211019231643-6808 kubelet[1214]: E1019 23:27:51.622336    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	Oct 19 23:27:58 addons-20211019231643-6808 kubelet[1214]: I1019 23:27:58.621170    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-csvcw" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:28:05 addons-20211019231643-6808 kubelet[1214]: I1019 23:28:05.621668    1214 kubelet_pods.go:897] "Unable to retrieve pull secret, the image pull may not succeed." pod="my-etcd/cluster-manager-794c6cc889-snlxr" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 23:28:05 addons-20211019231643-6808 kubelet[1214]: E1019 23:28:05.622615    1214 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"private-image\" with ImagePullBackOff: \"Back-off pulling image \\\"us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4\\\"\"" pod="default/private-image-7ff9c8c74f-4ttlt" podUID=3b1faae0-8021-458a-80e6-050c71b305b7
	
	* 
	* ==> storage-provisioner [5bb61fdfaef4d094fea2ba1f08268367abea3f6b77e2fcc6d3435f07d0b77b64] <==
	* I1019 23:18:16.714172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 23:18:16.728175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 23:18:16.728214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 23:18:16.738801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 23:18:16.738850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e0ed9016-4205-4fc6-9f0b-e1ec8542abba", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20211019231643-6808_65601e4d-7637-4f35-962f-856b9d0fa7c6 became leader
	I1019 23:18:16.738932       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20211019231643-6808_65601e4d-7637-4f35-962f-856b9d0fa7c6!
	I1019 23:18:16.839082       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20211019231643-6808_65601e4d-7637-4f35-962f-856b9d0fa7c6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20211019231643-6808 -n addons-20211019231643-6808
helpers_test.go:261: (dbg) Run:  kubectl --context addons-20211019231643-6808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: private-image-7ff9c8c74f-4ttlt 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-s9jsp
helpers_test.go:272: ======> post-mortem[TestAddons/serial/GCPAuth]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context addons-20211019231643-6808 describe pod private-image-7ff9c8c74f-4ttlt 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-s9jsp
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context addons-20211019231643-6808 describe pod private-image-7ff9c8c74f-4ttlt 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-s9jsp: exit status 1 (70.427047ms)

-- stdout --
	Name:         private-image-7ff9c8c74f-4ttlt
	Namespace:    default
	Priority:     0
	Node:         addons-20211019231643-6808/192.168.49.2
	Start Time:   Tue, 19 Oct 2021 23:20:05 +0000
	Labels:       integration-test=private-image
	              pod-template-hash=7ff9c8c74f
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.37
	IPs:
	  IP:           10.244.0.37
	Controlled By:  ReplicaSet/private-image-7ff9c8c74f
	Containers:
	  private-image:
	    Container ID:   
	    Image:          us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      k8s-minikube
	      GCP_PROJECT:                     k8s-minikube
	      GCLOUD_PROJECT:                  k8s-minikube
	      GOOGLE_CLOUD_PROJECT:            k8s-minikube
	      CLOUDSDK_CORE_PROJECT:           k8s-minikube
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wj62d (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-wj62d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/private-image-7ff9c8c74f-4ttlt to addons-20211019231643-6808
	  Normal   Pulling    6m29s (x4 over 8m2s)   kubelet            Pulling image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4"
	  Warning  Failed     6m29s (x4 over 8m2s)   kubelet            Failed to pull image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": rpc error: code = Unknown desc = failed to pull and unpack image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": failed to resolve reference "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
	  Warning  Failed     6m29s (x4 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m15s (x6 over 8m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m47s (x21 over 8m1s)  kubelet            Back-off pulling image "us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver:1.4"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-s9jsp" not found

** /stderr **
helpers_test.go:277: kubectl --context addons-20211019231643-6808 describe pod private-image-7ff9c8c74f-4ttlt 4b15913fc7680de4a89b21d8e9a73b9867ae32e6b05f6cc204fe5d--1-s9jsp: exit status 1
--- FAIL: TestAddons/serial/GCPAuth (491.43s)

TestFunctional/serial/LogsFileCmd (1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1190: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 logs --file /tmp/functional-20211019232952-68081477499330/logs.txt
functional_test.go:1190: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 logs --file /tmp/functional-20211019232952-68081477499330/logs.txt: (1.003512696s)
functional_test.go:1195: expected empty minikube logs output, but got: 
***
-- stdout --
	

-- /stdout --
** stderr ** 
	E1019 23:32:09.388842   39361 logs.go:190] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 0605b57aae11f41c9325d5a3e76e68ec1e64c6b38b67cf1ad6a33625b7c41695" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 0605b57aae11f41c9325d5a3e76e68ec1e64c6b38b67cf1ad6a33625b7c41695": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:32:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-78fcd69978-q2vbn_31ac401f-a50c-4c2c-9f3d-6dad69101449/coredns/1.log\": lstat /var/log/pods/kube-system_coredns-78fcd69978-q2vbn_31ac401f-a50c-4c2c-9f3d-6dad69101449/coredns/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-10-19T23:32:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_coredns-78fcd69978-q2vbn_31ac401f-a50c-4c2c-9f3d-6dad69101449/coredns/1.log\\\": lstat /var/log/pods/kube-system_coredns-78fcd69978-q2vbn_31ac401f-a50c-4c2c-9f3d-6dad69101449/coredns/1.log: no such file or directory\"\n\n** /stderr **"
	E1019 23:32:09.530544   39361 logs.go:190] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 26ea40ab7d60a4ab85a2df08d876563f5830188a6ff9ef57dbaf4f2664fd043e" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 26ea40ab7d60a4ab85a2df08d876563f5830188a6ff9ef57dbaf4f2664fd043e": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:32:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-functional-20211019232952-6808_1c1baa1e16069c8299f842ff5ddaa1f7/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-functional-20211019232952-6808_1c1baa1e16069c8299f842ff5ddaa1f7/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-10-19T23:32:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-functional-20211019232952-6808_1c1baa1e16069c8299f842ff5ddaa1f7/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-functional-20211019232952-6808_1c1baa1e16069c8299f842ff5ddaa1f7/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E1019 23:32:09.617561   39361 logs.go:190] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 1f07e40be30ee71a5a218a0ac6cf3738d894ec7ee41418530e9be1f569667821" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 1f07e40be30ee71a5a218a0ac6cf3738d894ec7ee41418530e9be1f569667821": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:32:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-functional-20211019232952-6808_72e8d1cd7ea4bd2d645412998dabd26d/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-functional-20211019232952-6808_72e8d1cd7ea4bd2d645412998dabd26d/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-10-19T23:32:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-functional-20211019232952-6808_72e8d1cd7ea4bd2d645412998dabd26d/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-functional-20211019232952-6808_72e8d1cd7ea4bd2d645412998dabd26d/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E1019 23:32:09.667951   39361 logs.go:190] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 4c1a817901d111851a5e18b472473200fc8ee0ad5b6a1a5601e17c7044df1a36" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 4c1a817901d111851a5e18b472473200fc8ee0ad5b6a1a5601e17c7044df1a36": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:32:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-p4tjk_bb256ffe-e33b-430f-ad05-731b42c86d22/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-p4tjk_bb256ffe-e33b-430f-ad05-731b42c86d22/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-10-19T23:32:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-p4tjk_bb256ffe-e33b-430f-ad05-731b42c86d22/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-p4tjk_bb256ffe-e33b-430f-ad05-731b42c86d22/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E1019 23:32:09.717074   39361 logs.go:190] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 db3c26ef77419809adcad9f78497ebe74671d91829b7c1cc952129fd23bb5969" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 db3c26ef77419809adcad9f78497ebe74671d91829b7c1cc952129fd23bb5969": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:32:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-functional-20211019232952-6808_57db3acdf8bbcdd1bcabf96247ea940c/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-functional-20211019232952-6808_57db3acdf8bbcdd1bcabf96247ea940c/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-10-19T23:32:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-functional-20211019232952-6808_57db3acdf8bbcdd1bcabf96247ea940c/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-functional-20211019232952-6808_57db3acdf8bbcdd1bcabf96247ea940c/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	E1019 23:32:09.810416   39361 logs.go:190] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 aad3662df36946428c02b080b9faefbda8bf1f6ba0635384593321e06fb7a49f" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 aad3662df36946428c02b080b9faefbda8bf1f6ba0635384593321e06fb7a49f": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:32:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_storage-provisioner_4de9e5f9-67b1-460f-a461-9b4aba1dcd35/storage-provisioner/1.log\": lstat /var/log/pods/kube-system_storage-provisioner_4de9e5f9-67b1-460f-a461-9b4aba1dcd35/storage-provisioner/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-10-19T23:32:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_storage-provisioner_4de9e5f9-67b1-460f-a461-9b4aba1dcd35/storage-provisioner/1.log\\\": lstat /var/log/pods/kube-system_storage-provisioner_4de9e5f9-67b1-460f-a461-9b4aba1dcd35/storage-provisioner/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: coredns [0605b57aae11f41c9325d5a3e76e68ec1e64c6b38b67cf1ad6a33625b7c41695], etcd [26ea40ab7d60a4ab85a2df08d876563f5830188a6ff9ef57dbaf4f2664fd043e], kube-controller-manager [1f07e40be30ee71a5a218a0ac6cf3738d894ec7ee41418530e9be1f569667821], kube-proxy [4c1a817901d111851a5e18b472473200fc8ee0ad5b6a1a5601e17c7044df1a36], kube-scheduler [db3c26ef77419809adcad9f78497ebe74671d91829b7c1cc952129fd23bb5969], storage-provisioner [aad3662df36946428c02b080b9faefbda8bf1f6ba0635384593321e06fb7a49f]

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/serial/LogsFileCmd (1.00s)
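
All five crictl failures above share one mode: `lstat` fails on a rotated `1.log` pod-log path that was never written, so `crictl logs` dies with "failed to try resolving symlinks" before it can tail anything. A minimal sketch of a pre-check that would let a caller skip the doomed call instead of surfacing exit status 1 — `can_fetch_log` is a hypothetical helper for illustration, not part of minikube or crictl:

```python
import os
import tempfile

def can_fetch_log(pod_log_path: str) -> bool:
    """Pre-check mirroring crictl's failure mode: its fatal 'failed to
    try resolving symlinks' message corresponds to lstat() failing on
    the rotated log path, so an existence check avoids the round trip."""
    return os.path.lexists(pod_log_path)

with tempfile.TemporaryDirectory() as d:
    # Mirror the layout from the errors above: 0.log exists, 1.log was
    # never written on this node.
    os.makedirs(os.path.join(d, "etcd"))
    present = os.path.join(d, "etcd", "0.log")
    missing = os.path.join(d, "etcd", "1.log")
    open(present, "w").close()
    print(can_fetch_log(present))   # True
    print(can_fetch_log(missing))   # False: the case crictl fails on
```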

                                                
                                    
TestScheduledStopUnix (89.34s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20211019235100-6808 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20211019235100-6808 --memory=2048 --driver=docker  --container-runtime=containerd: (43.470031832s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211019235100-6808 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20211019235100-6808 -n scheduled-stop-20211019235100-6808
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211019235100-6808 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211019235100-6808 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211019235100-6808 -n scheduled-stop-20211019235100-6808
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20211019235100-6808
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211019235100-6808 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1019 23:52:11.995281    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20211019235100-6808
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20211019235100-6808: exit status 3 (1.92272428s)

                                                
                                                
-- stdout --
	scheduled-stop-20211019235100-6808
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 23:52:24.243074  114677 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E1019 23:52:24.243104  114677 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

                                                
                                                
-- stdout --
	scheduled-stop-20211019235100-6808
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 23:52:24.243074  114677 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E1019 23:52:24.243104  114677 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
panic.go:642: *** TestScheduledStopUnix FAILED at 2021-10-19 23:52:24.245008977 +0000 UTC m=+2181.761548478
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20211019235100-6808
helpers_test.go:235: (dbg) docker inspect scheduled-stop-20211019235100-6808:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08d73560d6fad783628e26cc72c8b215c45e9f66bed63894f4355d773ce6386b",
	        "Created": "2021-10-19T23:51:01.921716389Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2021-10-19T23:51:02.414641738Z",
	            "FinishedAt": "2021-10-19T23:52:22.514257603Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/08d73560d6fad783628e26cc72c8b215c45e9f66bed63894f4355d773ce6386b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08d73560d6fad783628e26cc72c8b215c45e9f66bed63894f4355d773ce6386b/hostname",
	        "HostsPath": "/var/lib/docker/containers/08d73560d6fad783628e26cc72c8b215c45e9f66bed63894f4355d773ce6386b/hosts",
	        "LogPath": "/var/lib/docker/containers/08d73560d6fad783628e26cc72c8b215c45e9f66bed63894f4355d773ce6386b/08d73560d6fad783628e26cc72c8b215c45e9f66bed63894f4355d773ce6386b-json.log",
	        "Name": "/scheduled-stop-20211019235100-6808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20211019235100-6808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20211019235100-6808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8038abb17ca8f9ccc6f4060e22506d3be20d95069a1724cde4f9a6b0927e0b3-init/diff:/var/lib/docker/overlay2/c5f02e45537672705e6e2aba01e28e51c2d46e9e49272769c2467ad01e3a8798/diff:/var/lib/docker/overlay2/0ea8f075729b71baf4ccd823eb0c485ef5ef29bac0353c53e6065129629066ba/diff:/var/lib/docker/overlay2/88533b581a0009f254ebbf2a93788f6fea438104765bb8cd83e8242f6f007976/diff:/var/lib/docker/overlay2/14bd4aa948d82ba23130a19a3ed9b4398666fdb3b6961d42dfbea97f89c5cc8d/diff:/var/lib/docker/overlay2/c45d302040f8b5fa2a3863603e33a4a8945fc269c7c7d5b5fed4c0af74e1a318/diff:/var/lib/docker/overlay2/895bf38a811e31bbe8c73359865eb48ad82bbb675c072510dec392e208b25255/diff:/var/lib/docker/overlay2/888eda45177c40a5e8c507f72bfcd0e63d9fb3fbf18304e303b30f7858d75385/diff:/var/lib/docker/overlay2/1ba6be2d4915efd99d70838596bc306c3a5986d1687b7b5c82e3bcce08f2739a/diff:/var/lib/docker/overlay2/dba40492bd25835ae3cae4014121bd7743a6df6c6b6f568b50a45660d36da1ee/diff:/var/lib/docker/overlay2/f5e2e9f4ecf2e4c242e9a2d0f40f87313ae18433e49a64f8e612bc21342ba35a/diff:/var/lib/docker/overlay2/55c3bd632609db28efc8e436188a29be66e40c950a2f07ba9a636848bcf8fe54/diff:/var/lib/docker/overlay2/e743e9a12912c11573135dbbc7915737001fecbbddc57c8609bb97c48e477ab5/diff:/var/lib/docker/overlay2/0d2ab28eea343e33cf288e808e3583a4ef3a2d0f0e152cb4b6a696fe2829aa37/diff:/var/lib/docker/overlay2/a9d9379f41d6df50597861e1eca41e6abb25962190c7c4bf07f9b6cecd398ab8/diff:/var/lib/docker/overlay2/246f3a84f3d47634be292753e6ab114ff0fdd6c19174f98d20d3631b985bd65a/diff:/var/lib/docker/overlay2/1555899ff98b2aefb432f4f7413f555e1419a0c7f1ff8906ff527987be55f98f/diff:/var/lib/docker/overlay2/2046d76f6555966732657251ae8f140fcb5c9ba7becf9f4ea2f38d02553b81a8/diff:/var/lib/docker/overlay2/45481a57311300993df1fbf7c1affd06d3f1b0b4f3d4786b38e20ed6419a53d4/diff:/var/lib/docker/overlay2/3e461aa0d9087cb03e88430b7acc263fdb5b7c9685b89e24b2e636c0b764ff95/diff:/var/lib/docker/overlay2/fd0a889ad13f00ae1f69b1916a5927927a49c38948b00d54db2145eed106176b/diff:/var/lib/docker/overlay2/b9ba91322e89a003f05f9d5f11671de5af7cda4550b6f60c9d4ff90724e4e6f9/diff:/var/lib/docker/overlay2/e778c4cc31a31336a7ba77c6088ba8748d9c140197750d9e6c90b127c01e530b/diff:/var/lib/docker/overlay2/95fbf604ebdd8cc367f642230ac0a68cbc72994b41f02db95f354c4fdc75af24/diff:/var/lib/docker/overlay2/3f2110c7a6beb28d94b23699e2ca6363aa4ea63d43035d2e539e47fcd331a2ba/diff:/var/lib/docker/overlay2/1bf0ef7d7a45ab3ea27aa3d2f6b34dce6e59fc18e7e464e718352e8ea7029fd8/diff:/var/lib/docker/overlay2/23c70a1791e26b5ae01282413205d77ba4ecd95b3e170ea40176e856715a9359/diff:/var/lib/docker/overlay2/d830811487929eab10415090b3d97a6d7865d450267b84aa4184392814d8b1df/diff:/var/lib/docker/overlay2/b36fdfbf0198d3321dcce07dba3a6910f2cbe2dc276d6530613f56cd94afcc31/diff:/var/lib/docker/overlay2/cc70ad4a43336c6e67e06e2c9e0a2f30140594f1db80424144ea2b077c4ab1af/diff:/var/lib/docker/overlay2/d728e7526151cbb5fa43fbde71e65b654ce3b44983512cd2e690c8798d0d3ef8/diff:/var/lib/docker/overlay2/d701d06f30bc941260abb1757ee832c9aa2195a14f1552bd9d4baf5e6144d5ad/diff:/var/lib/docker/overlay2/6bd5ed3108886521103186b5318e03a8d90f0dbc80d27990929d83cd889e533d/diff:/var/lib/docker/overlay2/e27df6450133558da7555ef518af1a37ae6a07ce47c6767fec076ca914b56d50/diff:/var/lib/docker/overlay2/f820866aa600bd3aaac0b53c24ed1df712be2f6192e572cb551d67d8ed5f2ae8/diff:/var/lib/docker/overlay2/867ec5e44c5a5ca3a4bbcb5e3ebe23023fd7c7f42cc297619dd37fa19e58a4ce/diff:/var/lib/docker/overlay2/1de77d9dd95b7ad0b72d0e674dbf02cf40000662dc54842d9d003febd75e9183/diff:/var/lib/docker/overlay2/8249f5dae9e196bbeeca2cec8401a9bd3dc73ea9eb3884270fea296047dc72e1/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8038abb17ca8f9ccc6f4060e22506d3be20d95069a1724cde4f9a6b0927e0b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8038abb17ca8f9ccc6f4060e22506d3be20d95069a1724cde4f9a6b0927e0b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8038abb17ca8f9ccc6f4060e22506d3be20d95069a1724cde4f9a6b0927e0b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20211019235100-6808",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20211019235100-6808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20211019235100-6808",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20211019235100-6808",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20211019235100-6808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3212feede3d6766c276132b338f38e240842d35e64448651adc30621086b4045",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3212feede3d6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20211019235100-6808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "08d73560d6fa"
	                    ],
	                    "NetworkID": "66b7bbfd40669ebc6cbcf6606308b034b4959264457be01d7b3fb74f595ab0db",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211019235100-6808 -n scheduled-stop-20211019235100-6808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211019235100-6808 -n scheduled-stop-20211019235100-6808: exit status 7 (90.395421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20211019235100-6808" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-20211019235100-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20211019235100-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20211019235100-6808: (5.387592748s)
--- FAIL: TestScheduledStopUnix (89.34s)
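
The `docker inspect` output above shows the node container exited with `ExitCode: 137` even though `--cancel-scheduled` had run, i.e. a schedule still fired and force-killed the container. Docker reports `128 + N` when a container's PID 1 dies from signal N, so 137 decodes to SIGKILL. A quick sketch of that decoding, using a trimmed copy of the `State` block from the inspect output above:

```python
import json
import signal

# Trimmed "State" block copied from the docker inspect output above.
state = json.loads("""
{
  "Status": "exited",
  "Running": false,
  "ExitCode": 137,
  "FinishedAt": "2021-10-19T23:52:22.514257603Z"
}
""")

# Docker's convention: exit code 128 + N means PID 1 died from signal N,
# so 137 - 128 = 9, i.e. SIGKILL.
sig = signal.Signals(state["ExitCode"] - 128)
print(state["Status"], sig.name)  # exited SIGKILL
```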

                                                
                                    
TestPause/serial/PauseAgain (19.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20211019235243-6808 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20211019235243-6808 --alsologtostderr -v=5: exit status 80 (9.516038289s)

                                                
                                                
-- stdout --
	* Pausing node pause-20211019235243-6808 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 23:54:20.219237  136454 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:54:20.219400  136454 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:54:20.219415  136454 out.go:310] Setting ErrFile to fd 2...
	I1019 23:54:20.219421  136454 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:54:20.220055  136454 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:54:20.220461  136454 out.go:304] Setting JSON to false
	I1019 23:54:20.220486  136454 mustload.go:65] Loading cluster: pause-20211019235243-6808
	I1019 23:54:20.220870  136454 config.go:176] Loaded profile config "pause-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:54:20.221330  136454 cli_runner.go:115] Run: docker container inspect pause-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:20.273876  136454 host.go:66] Checking if "pause-20211019235243-6808" exists ...
	I1019 23:54:20.274252  136454 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:54:20.367486  136454 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:67 SystemTime:2021-10-19 23:54:20.31567128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:54:20.368172  136454 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12081/minikube-v1.23.1-1633115168-12081.iso https://github.com/kubernetes/minikube/releases/download/v1.23.1-1633115168-12081/minikube-v1.23.1-1633115168-12081.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.23.1-1633115168-12081.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20211019235243-6808 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 23:54:20.370818  136454 out.go:176] * Pausing node pause-20211019235243-6808 ... 
	I1019 23:54:20.370845  136454 host.go:66] Checking if "pause-20211019235243-6808" exists ...
	I1019 23:54:20.372141  136454 ssh_runner.go:152] Run: systemctl --version
	I1019 23:54:20.372175  136454 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:20.416479  136454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:20.500190  136454 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:54:20.511068  136454 pause.go:50] kubelet running: true
	I1019 23:54:20.511135  136454 ssh_runner.go:152] Run: sudo systemctl disable --now kubelet
	I1019 23:54:25.325883  136454 ssh_runner.go:192] Completed: sudo systemctl disable --now kubelet: (4.814727525s)
	I1019 23:54:27.310736  136454 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I1019 23:54:27.310834  136454 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 23:54:27.391691  136454 cri.go:76] found id: "7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf"
	I1019 23:54:27.391720  136454 cri.go:76] found id: "67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38"
	I1019 23:54:27.391728  136454 cri.go:76] found id: "8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578"
	I1019 23:54:27.391735  136454 cri.go:76] found id: "5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103"
	I1019 23:54:27.391739  136454 cri.go:76] found id: "c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef"
	I1019 23:54:27.391744  136454 cri.go:76] found id: "f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb"
	I1019 23:54:27.391748  136454 cri.go:76] found id: "7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465"
	I1019 23:54:27.391753  136454 cri.go:76] found id: "5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995"
	I1019 23:54:27.391757  136454 cri.go:76] found id: ""
	I1019 23:54:27.391795  136454 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1019 23:54:27.431140  136454 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823","pid":1738,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823/rootfs","created":"2021-10-19T23:53:51.968041229Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-4h4jt_683931b6-6d3e-46c6-9939-7e6148247459"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466","pid":1021,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25169287aa5e57ecc19e6ec932c70c80d1a289f
0b60ef6d629561d4c2229c466","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466/rootfs","created":"2021-10-19T23:53:18.879982422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20211019235243-6808_cde52a1a4ece8bd483c0e7444e3529e8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b","pid":2010,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b/rootfs","created":"2021-10-19T23:53:59.57200209Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.
kubernetes.cri.sandbox-id":"2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-whpds_02502037-a01b-4cfb-a216-57c9601390a2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e","pid":1013,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e/rootfs","created":"2021-10-19T23:53:18.880048379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20211019235243-6808_8569f8510d5237aa9eb0268e6d83b021"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995","pid":1143,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995/rootfs","created":"2021-10-19T23:53:19.18003269Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103","pid":1778,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103/ro
otfs","created":"2021-10-19T23:53:52.124094259Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38","pid":2043,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38/rootfs","created":"2021-10-19T23:53:59.824044588Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad
0ddf","pid":2614,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf/rootfs","created":"2021-10-19T23:54:17.156030897Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465","pid":1117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465/rootfs","created":"2021-10-19T23:53:19.120007913Z","annotatio
ns":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578","pid":1864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578/rootfs","created":"2021-10-19T23:53:52.508005914Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620","pid":1000,"status":"running","bundle":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620/rootfs","created":"2021-10-19T23:53:18.879980395Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20211019235243-6808_c04a89ac537d5c77f40d367f319785a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5","pid":998,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5/rootfs","created":"2021-10-1
9T23:53:18.879998404Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20211019235243-6808_ee9b4e0bead94b40fae13e6ee192e478"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef","pid":1155,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef/rootfs","created":"2021-10-19T23:53:19.180054826Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e"},"owner":"root"},{"ociVersion":"1.0
.2-dev","id":"f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb","pid":1745,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb/rootfs","created":"2021-10-19T23:53:52.156049702Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-hx6rh_3b1fccbe-8c6c-4901-9b87-08a0c706c2ec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb","pid":1156,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb/rootfs","created":"2021-10-19T23:53:19.180098798Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60","pid":2578,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60/rootfs","created":"2021-10-19T23:54:16.930483709Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storag
e-provisioner_7556b1b7-1180-43ef-8ff0-ee10aaa82bfe"},"owner":"root"}]
	I1019 23:54:27.431370  136454 cri.go:113] list returned 16 containers
	I1019 23:54:27.431383  136454 cri.go:116] container: {ID:2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823 Status:running}
	I1019 23:54:27.431408  136454 cri.go:118] skipping 2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823 - not in ps
	I1019 23:54:27.431419  136454 cri.go:116] container: {ID:25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466 Status:running}
	I1019 23:54:27.431430  136454 cri.go:118] skipping 25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466 - not in ps
	I1019 23:54:27.431437  136454 cri.go:116] container: {ID:2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b Status:running}
	I1019 23:54:27.431447  136454 cri.go:118] skipping 2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b - not in ps
	I1019 23:54:27.431455  136454 cri.go:116] container: {ID:51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e Status:running}
	I1019 23:54:27.431465  136454 cri.go:118] skipping 51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e - not in ps
	I1019 23:54:27.431473  136454 cri.go:116] container: {ID:5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995 Status:running}
	I1019 23:54:27.431481  136454 cri.go:116] container: {ID:5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103 Status:running}
	I1019 23:54:27.431486  136454 cri.go:116] container: {ID:67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38 Status:running}
	I1019 23:54:27.431495  136454 cri.go:116] container: {ID:7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf Status:running}
	I1019 23:54:27.431507  136454 cri.go:116] container: {ID:7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465 Status:running}
	I1019 23:54:27.431517  136454 cri.go:116] container: {ID:8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578 Status:running}
	I1019 23:54:27.431527  136454 cri.go:116] container: {ID:995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620 Status:running}
	I1019 23:54:27.431536  136454 cri.go:118] skipping 995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620 - not in ps
	I1019 23:54:27.431544  136454 cri.go:116] container: {ID:b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5 Status:running}
	I1019 23:54:27.431550  136454 cri.go:118] skipping b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5 - not in ps
	I1019 23:54:27.431558  136454 cri.go:116] container: {ID:c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef Status:running}
	I1019 23:54:27.431567  136454 cri.go:116] container: {ID:f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb Status:running}
	I1019 23:54:27.431576  136454 cri.go:118] skipping f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb - not in ps
	I1019 23:54:27.431585  136454 cri.go:116] container: {ID:f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb Status:running}
	I1019 23:54:27.431595  136454 cri.go:116] container: {ID:f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60 Status:running}
	I1019 23:54:27.431605  136454 cri.go:118] skipping f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60 - not in ps
	I1019 23:54:27.431650  136454 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995
	I1019 23:54:28.617828  136454 out.go:176] 
	W1019 23:54:28.618023  136454 out.go:241] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:54:27Z" level=error msg="unable to freeze"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:54:27Z" level=error msg="unable to freeze"
	
	W1019 23:54:28.618040  136454 out.go:241] * 
	* 
	W1019 23:54:28.620815  136454 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 23:54:29.651035  136454 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20211019235243-6808 --alsologtostderr -v=5" : exit status 80
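For context on the failing step above: the log shows minikube's pause path listing all runc containers (`sudo runc --root /run/containerd/runc/k8s.io list -f json`), skipping the pod-sandbox entries that crictl does not report ("skipping ... - not in ps"), and then running `runc pause` on the remaining container IDs — the etcd container `5b9dd31e...` is the first one attempted, and it fails with "unable to freeze". A minimal sketch of that filtering step, using a hypothetical trimmed two-entry sample of the `runc list -f json` output (real entries also carry `bundle`/`rootfs` paths):

```python
import json

# Hypothetical, trimmed sample of `runc list -f json` output: one pod
# sandbox and one real container, mirroring the entries in the log above.
runc_list = json.loads("""
[
  {"id": "2255d1c3ffe7", "status": "running",
   "annotations": {"io.kubernetes.cri.container-type": "sandbox"}},
  {"id": "5b9dd31e0a54", "status": "running",
   "annotations": {"io.kubernetes.cri.container-name": "etcd",
                   "io.kubernetes.cri.container-type": "container"}}
]
""")

# crictl ps only reports real containers, so sandbox entries are skipped
# ("skipping ... - not in ps"); the rest are candidates for `runc pause`.
pausable = [c["id"] for c in runc_list
            if c["annotations"].get("io.kubernetes.cri.container-type") == "container"
            and c["status"] == "running"]
print(pausable)  # -> ['5b9dd31e0a54']
```

The `runc pause` issued on each surviving ID writes to the cgroup freezer; the "unable to freeze" error in the log indicates that freezer write failed for the etcd container, which is what surfaces as `GUEST_PAUSE` / exit status 80.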
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211019235243-6808
helpers_test.go:235: (dbg) docker inspect pause-20211019235243-6808:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996",
	        "Created": "2021-10-19T23:52:45.024014891Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 119055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-10-19T23:52:46.857434767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/hosts",
	        "LogPath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996-json.log",
	        "Name": "/pause-20211019235243-6808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20211019235243-6808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20211019235243-6808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed-init/diff:/var/lib/docker/overlay2/c5f02e45537672705e6e2aba01e28e51c2d46e9e49272769c2467ad01e3a8798/diff:/var/lib/docker/overlay2/0ea8f075729b71baf4ccd823eb0c485ef5ef29bac0353c53e6065129629066ba/diff:/var/lib/docker/overlay2/88533b581a0009f254ebbf2a93788f6fea438104765bb8cd83e8242f6f007976/diff:/var/lib/docker/overlay2/14bd4aa948d82ba23130a19a3ed9b4398666fdb3b6961d42dfbea97f89c5cc8d/diff:/var/lib/docker/overlay2/c45d302040f8b5fa2a3863603e33a4a8945fc269c7c7d5b5fed4c0af74e1a318/diff:/var/lib/docker/overlay2/895bf38a811e31bbe8c73359865eb48ad82bbb675c072510dec392e208b25255/diff:/var/lib/docker/overlay2/888eda45177c40a5e8c507f72bfcd0e63d9fb3fbf18304e303b30f7858d75385/diff:/var/lib/docker/overlay2/1ba6be2d4915efd99d70838596bc306c3a5986d1687b7b5c82e3bcce08f2739a/diff:/var/lib/docker/overlay2/dba40492bd25835ae3cae4014121bd7743a6df6c6b6f568b50a45660d36da1ee/diff:/var/lib/docker/overlay2/f5e2e9
f4ecf2e4c242e9a2d0f40f87313ae18433e49a64f8e612bc21342ba35a/diff:/var/lib/docker/overlay2/55c3bd632609db28efc8e436188a29be66e40c950a2f07ba9a636848bcf8fe54/diff:/var/lib/docker/overlay2/e743e9a12912c11573135dbbc7915737001fecbbddc57c8609bb97c48e477ab5/diff:/var/lib/docker/overlay2/0d2ab28eea343e33cf288e808e3583a4ef3a2d0f0e152cb4b6a696fe2829aa37/diff:/var/lib/docker/overlay2/a9d9379f41d6df50597861e1eca41e6abb25962190c7c4bf07f9b6cecd398ab8/diff:/var/lib/docker/overlay2/246f3a84f3d47634be292753e6ab114ff0fdd6c19174f98d20d3631b985bd65a/diff:/var/lib/docker/overlay2/1555899ff98b2aefb432f4f7413f555e1419a0c7f1ff8906ff527987be55f98f/diff:/var/lib/docker/overlay2/2046d76f6555966732657251ae8f140fcb5c9ba7becf9f4ea2f38d02553b81a8/diff:/var/lib/docker/overlay2/45481a57311300993df1fbf7c1affd06d3f1b0b4f3d4786b38e20ed6419a53d4/diff:/var/lib/docker/overlay2/3e461aa0d9087cb03e88430b7acc263fdb5b7c9685b89e24b2e636c0b764ff95/diff:/var/lib/docker/overlay2/fd0a889ad13f00ae1f69b1916a5927927a49c38948b00d54db2145eed106176b/diff:/var/lib/d
ocker/overlay2/b9ba91322e89a003f05f9d5f11671de5af7cda4550b6f60c9d4ff90724e4e6f9/diff:/var/lib/docker/overlay2/e778c4cc31a31336a7ba77c6088ba8748d9c140197750d9e6c90b127c01e530b/diff:/var/lib/docker/overlay2/95fbf604ebdd8cc367f642230ac0a68cbc72994b41f02db95f354c4fdc75af24/diff:/var/lib/docker/overlay2/3f2110c7a6beb28d94b23699e2ca6363aa4ea63d43035d2e539e47fcd331a2ba/diff:/var/lib/docker/overlay2/1bf0ef7d7a45ab3ea27aa3d2f6b34dce6e59fc18e7e464e718352e8ea7029fd8/diff:/var/lib/docker/overlay2/23c70a1791e26b5ae01282413205d77ba4ecd95b3e170ea40176e856715a9359/diff:/var/lib/docker/overlay2/d830811487929eab10415090b3d97a6d7865d450267b84aa4184392814d8b1df/diff:/var/lib/docker/overlay2/b36fdfbf0198d3321dcce07dba3a6910f2cbe2dc276d6530613f56cd94afcc31/diff:/var/lib/docker/overlay2/cc70ad4a43336c6e67e06e2c9e0a2f30140594f1db80424144ea2b077c4ab1af/diff:/var/lib/docker/overlay2/d728e7526151cbb5fa43fbde71e65b654ce3b44983512cd2e690c8798d0d3ef8/diff:/var/lib/docker/overlay2/d701d06f30bc941260abb1757ee832c9aa2195a14f1552bd9d4baf5e614
4d5ad/diff:/var/lib/docker/overlay2/6bd5ed3108886521103186b5318e03a8d90f0dbc80d27990929d83cd889e533d/diff:/var/lib/docker/overlay2/e27df6450133558da7555ef518af1a37ae6a07ce47c6767fec076ca914b56d50/diff:/var/lib/docker/overlay2/f820866aa600bd3aaac0b53c24ed1df712be2f6192e572cb551d67d8ed5f2ae8/diff:/var/lib/docker/overlay2/867ec5e44c5a5ca3a4bbcb5e3ebe23023fd7c7f42cc297619dd37fa19e58a4ce/diff:/var/lib/docker/overlay2/1de77d9dd95b7ad0b72d0e674dbf02cf40000662dc54842d9d003febd75e9183/diff:/var/lib/docker/overlay2/8249f5dae9e196bbeeca2cec8401a9bd3dc73ea9eb3884270fea296047dc72e1/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20211019235243-6808",
	                "Source": "/var/lib/docker/volumes/pause-20211019235243-6808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20211019235243-6808",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20211019235243-6808",
	                "name.minikube.sigs.k8s.io": "pause-20211019235243-6808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c30954b550a8a0c7fccb8feca23da872fa02f79e0ed590ddb7cdcf6bac975f27",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c30954b550a8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20211019235243-6808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ca64042ca0a9"
	                    ],
	                    "NetworkID": "25cdaedb005cd1adf2c3e34e38db740b67fb4515aa837a06ae69830a0360464d",
	                    "EndpointID": "ff2efd15a8d6db9810ade83164075c4e794aff259445b744b6374f30856ece63",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211019235243-6808 -n pause-20211019235243-6808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211019235243-6808 -n pause-20211019235243-6808: exit status 2 (1.964200535s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-20211019235243-6808 logs -n 25

=== CONT  TestPause/serial/PauseAgain
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20211019235243-6808 logs -n 25: (4.822309294s)
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | multinode-20211019233815-6808-m03        | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:47:59 UTC | Tue, 19 Oct 2021 23:48:42 UTC |
	|         | multinode-20211019233815-6808-m03        |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20211019233815-6808-m03        | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:48:42 UTC | Tue, 19 Oct 2021 23:48:45 UTC |
	|         | multinode-20211019233815-6808-m03        |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20211019233815-6808            | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:48:45 UTC | Tue, 19 Oct 2021 23:48:50 UTC |
	|         | multinode-20211019233815-6808            |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:48:50 UTC | Tue, 19 Oct 2021 23:50:15 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:15 UTC | Tue, 19 Oct 2021 23:50:16 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox              |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:17 UTC | Tue, 19 Oct 2021 23:50:57 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:57 UTC | Tue, 19 Oct 2021 23:50:57 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:57 UTC | Tue, 19 Oct 2021 23:51:00 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:51:00 UTC | Tue, 19 Oct 2021 23:51:43 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:51:44 UTC | Tue, 19 Oct 2021 23:51:44 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:51:57 UTC | Tue, 19 Oct 2021 23:52:22 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:24 UTC | Tue, 19 Oct 2021 23:52:29 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20211019235229-6808 | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:36 UTC | Tue, 19 Oct 2021 23:52:43 UTC |
	|         | insufficient-storage-20211019235229-6808 |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20211019235243-6808    | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:43 UTC | Tue, 19 Oct 2021 23:53:50 UTC |
	|         | force-systemd-env-20211019235243-6808    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20211019235243-6808    | force-systemd-env-20211019235243-6808    | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:50 UTC | Tue, 19 Oct 2021 23:53:51 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20211019235243-6808    | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:51 UTC | Tue, 19 Oct 2021 23:53:54 UTC |
	|         | force-systemd-env-20211019235243-6808    |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20211019235354-6808           | kubenet-20211019235354-6808              | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:54 UTC | Tue, 19 Oct 2021 23:53:54 UTC |
	| delete  | -p flannel-20211019235354-6808           | flannel-20211019235354-6808              | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:54 UTC | Tue, 19 Oct 2021 23:53:55 UTC |
	| delete  | -p false-20211019235355-6808             | false-20211019235355-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:55 UTC | Tue, 19 Oct 2021 23:53:56 UTC |
	| start   | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:43 UTC | Tue, 19 Oct 2021 23:54:01 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20211019235243-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:43 UTC | Tue, 19 Oct 2021 23:54:16 UTC |
	|         | offline-containerd-20211019235243-6808   |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:01 UTC | Tue, 19 Oct 2021 23:54:17 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| pause   | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:17 UTC | Tue, 19 Oct 2021 23:54:18 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| unpause | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:19 UTC | Tue, 19 Oct 2021 23:54:20 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20211019235243-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:16 UTC | Tue, 19 Oct 2021 23:54:20 UTC |
	|         | offline-containerd-20211019235243-6808   |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/19 23:54:01
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 23:54:01.939165  131599 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:54:01.939264  131599 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:54:01.939273  131599 out.go:310] Setting ErrFile to fd 2...
	I1019 23:54:01.939277  131599 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:54:01.939396  131599 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:54:01.939665  131599 out.go:304] Setting JSON to false
	I1019 23:54:01.986404  131599 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5804,"bootTime":1634681838,"procs":284,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:54:01.986517  131599 start.go:121] virtualization: kvm guest
	I1019 23:54:01.988350  131599 out.go:176] * [pause-20211019235243-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1019 23:54:01.988470  131599 notify.go:169] Checking for updates...
	I1019 23:54:01.989782  131599 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:54:01.991392  131599 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 23:54:01.992795  131599 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1019 23:53:58.457003  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:53:58.957175  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:53:59.457835  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:53:59.957222  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:54:00.457498  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:54:00.957578  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:54:01.457652  117106 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 23:54:01.561601  117106 kubeadm.go:985] duration metric: took 12.289873518s to wait for elevateKubeSystemPrivileges.
	I1019 23:54:01.561634  117106 kubeadm.go:392] StartCluster complete in 33.464211102s
	I1019 23:54:01.561656  117106 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:54:01.561751  117106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:54:01.563336  117106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:54:01.564385  117106 kapi.go:59] client config for offline-containerd-20211019235243-6808: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/offline-containerd-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/offline-containerd-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:02.121499  117106 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "offline-containerd-20211019235243-6808" rescaled to 1
	I1019 23:54:02.121558  117106 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1019 23:54:01.995385  131599 out.go:176]   - MINIKUBE_LOCATION=12739
	I1019 23:54:01.996417  131599 config.go:176] Loaded profile config "pause-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:54:01.996990  131599 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:54:02.057621  131599 docker.go:132] docker version: linux-19.03.15
	I1019 23:54:02.057720  131599 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:54:02.190640  131599 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:77 SystemTime:2021-10-19 23:54:02.101223378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:54:02.190747  131599 docker.go:237] overlay module found
	I1019 23:54:02.123613  117106 out.go:176] * Verifying Kubernetes components...
	I1019 23:54:02.121644  117106 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 23:54:02.123706  117106 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:54:02.121859  117106 config.go:176] Loaded profile config "offline-containerd-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:54:02.121877  117106 addons.go:404] enableAddons start: toEnable=map[], additional=[]
	I1019 23:54:02.123915  117106 addons.go:65] Setting storage-provisioner=true in profile "offline-containerd-20211019235243-6808"
	I1019 23:54:02.123925  117106 addons.go:65] Setting default-storageclass=true in profile "offline-containerd-20211019235243-6808"
	I1019 23:54:02.123937  117106 addons.go:153] Setting addon storage-provisioner=true in "offline-containerd-20211019235243-6808"
	W1019 23:54:02.123945  117106 addons.go:165] addon storage-provisioner should already be in state true
	I1019 23:54:02.123946  117106 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-containerd-20211019235243-6808"
	I1019 23:54:02.123979  117106 host.go:66] Checking if "offline-containerd-20211019235243-6808" exists ...
	I1019 23:54:02.124303  117106 cli_runner.go:115] Run: docker container inspect offline-containerd-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:02.124473  117106 cli_runner.go:115] Run: docker container inspect offline-containerd-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:02.199419  117106 kapi.go:59] client config for offline-containerd-20211019235243-6808: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/offline-containerd-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/offline-containerd-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:02.206733  117106 addons.go:153] Setting addon default-storageclass=true in "offline-containerd-20211019235243-6808"
	W1019 23:54:02.206755  117106 addons.go:165] addon default-storageclass should already be in state true
	I1019 23:54:02.206872  117106 host.go:66] Checking if "offline-containerd-20211019235243-6808" exists ...
	I1019 23:54:02.207447  117106 cli_runner.go:115] Run: docker container inspect offline-containerd-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:02.194056  131599 out.go:176] * Using the docker driver based on existing profile
	I1019 23:54:02.194094  131599 start.go:278] selected driver: docker
	I1019 23:54:02.194101  131599 start.go:751] validating driver "docker" against &{Name:pause-20211019235243-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:pause-20211019235243-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:54:02.194217  131599 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1019 23:54:02.194710  131599 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:54:02.342924  131599 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:75 SystemTime:2021-10-19 23:54:02.261728618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:54:02.343733  131599 cni.go:93] Creating CNI manager for ""
	I1019 23:54:02.343750  131599 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:54:02.343771  131599 start_flags.go:280] config:
	{Name:pause-20211019235243-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:pause-20211019235243-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:54:02.346629  131599 out.go:176] * Starting control plane node pause-20211019235243-6808 in cluster pause-20211019235243-6808
	I1019 23:54:02.346674  131599 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1019 23:54:02.348411  131599 out.go:176] * Pulling base image ...
	I1019 23:54:02.348444  131599 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:54:02.348489  131599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4
	I1019 23:54:02.348507  131599 cache.go:57] Caching tarball of preloaded images
	I1019 23:54:02.348700  131599 preload.go:172] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1019 23:54:02.348718  131599 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2 on containerd
	I1019 23:54:02.348860  131599 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/config.json ...
	I1019 23:54:02.348997  131599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1019 23:54:02.487854  131599 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1019 23:54:02.487886  131599 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1019 23:54:02.487906  131599 cache.go:206] Successfully downloaded all kic artifacts
	I1019 23:54:02.487945  131599 start.go:313] acquiring machines lock for pause-20211019235243-6808: {Name:mkc0a923bfc4801d4784f9292167e4077bfab94b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:02.488043  131599 start.go:317] acquired machines lock for "pause-20211019235243-6808" in 72.932µs
	I1019 23:54:02.488076  131599 start.go:93] Skipping create...Using existing machine configuration
	I1019 23:54:02.488083  131599 fix.go:55] fixHost starting: 
	I1019 23:54:02.488424  131599 cli_runner.go:115] Run: docker container inspect pause-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:02.554457  131599 fix.go:108] recreateIfNeeded on pause-20211019235243-6808: state=Running err=<nil>
	W1019 23:54:02.554492  131599 fix.go:134] unexpected machine state, will restart: <nil>
	I1019 23:54:02.216440  117106 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 23:54:02.216591  117106 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 23:54:02.216604  117106 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 23:54:02.216669  117106 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20211019235243-6808
	I1019 23:54:02.249021  117106 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 23:54:02.249971  117106 kapi.go:59] client config for offline-containerd-20211019235243-6808: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/offline-containerd-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles
/offline-containerd-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:02.251516  117106 node_ready.go:35] waiting up to 6m0s for node "offline-containerd-20211019235243-6808" to be "Ready" ...
	I1019 23:54:02.270021  117106 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 23:54:02.270050  117106 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 23:54:02.270114  117106 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20211019235243-6808
	I1019 23:54:02.300253  117106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/offline-containerd-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:02.323853  117106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/offline-containerd-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:02.516868  117106 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 23:54:02.522949  117106 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 23:54:02.627581  117106 start.go:729] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1019 23:54:02.962179  117106 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1019 23:54:02.962212  117106 addons.go:406] enableAddons completed in 840.339568ms
	I1019 23:54:02.557866  131599 out.go:176] * Updating the running docker "pause-20211019235243-6808" container ...
	I1019 23:54:02.557903  131599 machine.go:88] provisioning docker machine ...
	I1019 23:54:02.557935  131599 ubuntu.go:169] provisioning hostname "pause-20211019235243-6808"
	I1019 23:54:02.557988  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:02.601657  131599 main.go:130] libmachine: Using SSH client type: native
	I1019 23:54:02.601834  131599 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 32887 <nil> <nil>}
	I1019 23:54:02.601847  131599 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20211019235243-6808 && echo "pause-20211019235243-6808" | sudo tee /etc/hostname
	I1019 23:54:02.728708  131599 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20211019235243-6808
	
	I1019 23:54:02.728825  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:02.773907  131599 main.go:130] libmachine: Using SSH client type: native
	I1019 23:54:02.774083  131599 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 32887 <nil> <nil>}
	I1019 23:54:02.774105  131599 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20211019235243-6808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20211019235243-6808/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20211019235243-6808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 23:54:02.884381  131599 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1019 23:54:02.884432  131599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube}
	I1019 23:54:02.884474  131599 ubuntu.go:177] setting up certificates
	I1019 23:54:02.884486  131599 provision.go:83] configureAuth start
	I1019 23:54:02.884544  131599 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20211019235243-6808
	I1019 23:54:02.938306  131599 provision.go:138] copyHostCerts
	I1019 23:54:02.938363  131599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem, removing ...
	I1019 23:54:02.938377  131599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem
	I1019 23:54:02.938426  131599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem (1082 bytes)
	I1019 23:54:02.939168  131599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem, removing ...
	I1019 23:54:02.939244  131599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem
	I1019 23:54:02.939289  131599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem (1123 bytes)
	I1019 23:54:02.939379  131599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem, removing ...
	I1019 23:54:02.939386  131599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem
	I1019 23:54:02.939417  131599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem (1679 bytes)
	I1019 23:54:02.939503  131599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem org=jenkins.pause-20211019235243-6808 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20211019235243-6808]
	I1019 23:54:03.036007  131599 provision.go:172] copyRemoteCerts
	I1019 23:54:03.036070  131599 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 23:54:03.036125  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:03.079273  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:03.162931  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 23:54:03.180262  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 23:54:03.196436  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 23:54:03.213128  131599 provision.go:86] duration metric: configureAuth took 328.62939ms
	I1019 23:54:03.213153  131599 ubuntu.go:193] setting minikube options for container-runtime
	I1019 23:54:03.213335  131599 config.go:176] Loaded profile config "pause-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:54:03.213352  131599 machine.go:91] provisioned docker machine in 655.443039ms
	I1019 23:54:03.213358  131599 start.go:267] post-start starting for "pause-20211019235243-6808" (driver="docker")
	I1019 23:54:03.213364  131599 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 23:54:03.213401  131599 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 23:54:03.213432  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:03.256372  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:03.340108  131599 ssh_runner.go:152] Run: cat /etc/os-release
	I1019 23:54:03.342865  131599 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1019 23:54:03.342893  131599 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 23:54:03.342906  131599 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1019 23:54:03.342913  131599 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1019 23:54:03.342923  131599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/addons for local assets ...
	I1019 23:54:03.342979  131599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files for local assets ...
	I1019 23:54:03.343074  131599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem -> 68082.pem in /etc/ssl/certs
	I1019 23:54:03.343197  131599 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I1019 23:54:03.349424  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /etc/ssl/certs/68082.pem (1708 bytes)
	I1019 23:54:03.365039  131599 start.go:270] post-start completed in 151.667975ms
	I1019 23:54:03.365104  131599 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 23:54:03.365151  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:03.410146  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:03.488211  131599 fix.go:57] fixHost completed within 1.000123586s
	I1019 23:54:03.488239  131599 start.go:80] releasing machines lock for "pause-20211019235243-6808", held for 1.000178175s
	I1019 23:54:03.488328  131599 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20211019235243-6808
	I1019 23:54:03.530330  131599 ssh_runner.go:152] Run: systemctl --version
	I1019 23:54:03.530358  131599 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1019 23:54:03.530383  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:03.530419  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:03.575605  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:03.576796  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:03.651956  131599 ssh_runner.go:152] Run: sudo systemctl stop -f crio
	I1019 23:54:03.678742  131599 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1019 23:54:03.687484  131599 docker.go:156] disabling docker service ...
	I1019 23:54:03.687535  131599 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
	I1019 23:54:03.696305  131599 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
	I1019 23:54:03.704580  131599 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
	I1019 23:54:03.793059  131599 ssh_runner.go:152] Run: sudo systemctl mask docker.service
	I1019 23:54:03.885008  131599 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
	I1019 23:54:03.894150  131599 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 23:54:03.906323  131599 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuY2dyb3Vwc10KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLmNyaV0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzY
W5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My41IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKCVtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmRdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jXQogICAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgICAgU3lzdGVtZENncm91cCA9IGZhbHNlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVud
HJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I1019 23:54:03.918663  131599 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 23:54:03.925431  131599 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 23:54:03.925489  131599 ssh_runner.go:152] Run: sudo modprobe br_netfilter
	I1019 23:54:03.936191  131599 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 23:54:03.944221  131599 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1019 23:54:04.034015  131599 ssh_runner.go:152] Run: sudo systemctl restart containerd
	I1019 23:54:04.116122  131599 start.go:393] Will wait 60s for socket path /run/containerd/containerd.sock
	I1019 23:54:04.116189  131599 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
	I1019 23:54:04.120429  131599 start.go:414] Will wait 60s for crictl version
	I1019 23:54:04.120493  131599 ssh_runner.go:152] Run: sudo crictl version
	I1019 23:54:04.155949  131599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-19T23:54:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1019 23:54:07.803927  127260 ssh_runner.go:152] Run: sudo crictl version
	I1019 23:54:07.818421  127260 start.go:423] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.3
	RuntimeApiVersion:  v1alpha2
	I1019 23:54:07.818483  127260 ssh_runner.go:152] Run: containerd --version
	I1019 23:54:07.841097  127260 ssh_runner.go:152] Run: containerd --version
	I1019 23:54:04.260194  117106 node_ready.go:58] node "offline-containerd-20211019235243-6808" has status "Ready":"False"
	I1019 23:54:06.260828  117106 node_ready.go:58] node "offline-containerd-20211019235243-6808" has status "Ready":"False"
	I1019 23:54:07.867948  127260 out.go:176] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	I1019 23:54:07.868034  127260 cli_runner.go:115] Run: docker network inspect stopped-upgrade-20211019235243-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 23:54:07.910079  127260 ssh_runner.go:152] Run: grep 192.168.59.1	host.minikube.internal$ /etc/hosts
	I1019 23:54:07.913650  127260 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.59.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 23:54:07.923569  127260 preload.go:130] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1019 23:54:07.923623  127260 ssh_runner.go:152] Run: sudo crictl images --output json
	I1019 23:54:07.937157  127260 containerd.go:631] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I1019 23:54:07.937175  127260 cache_images.go:83] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
	I1019 23:54:07.937240  127260 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1019 23:54:07.937245  127260 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I1019 23:54:07.937260  127260 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I1019 23:54:07.937273  127260 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I1019 23:54:07.937296  127260 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I1019 23:54:07.937418  127260 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I1019 23:54:07.937462  127260 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I1019 23:54:07.937481  127260 image.go:134] retrieving image: k8s.gcr.io/coredns:1.7.0
	I1019 23:54:07.937489  127260 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I1019 23:54:07.937497  127260 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 23:54:07.938348  127260 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
	I1019 23:54:07.938354  127260 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
	I1019 23:54:07.938383  127260 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
	I1019 23:54:07.938395  127260 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I1019 23:54:07.938403  127260 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
	I1019 23:54:07.938692  127260 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I1019 23:54:07.946191  127260 image.go:176] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc000010900 lock:{state:0 sema:0} manifest:<nil>}
	I1019 23:54:07.946235  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2"
	I1019 23:54:08.072023  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0"
	I1019 23:54:08.090699  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0"
	I1019 23:54:08.095049  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0"
	I1019 23:54:08.097418  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0"
	I1019 23:54:08.290951  127260 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc00059cb00 lock:{state:0 sema:0} manifest:<nil>}
	I1019 23:54:08.291005  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1019 23:54:08.400874  127260 image.go:176] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc000138228 lock:{state:0 sema:0} manifest:<nil>}
	I1019 23:54:08.400931  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0"
	I1019 23:54:08.426407  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
	I1019 23:54:08.432559  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
	I1019 23:54:08.518784  127260 cache_images.go:111] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1019 23:54:08.518837  127260 cri.go:204] Removing image: k8s.gcr.io/pause:3.2
	I1019 23:54:08.518885  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:08.753719  127260 cache_images.go:111] "k8s.gcr.io/kube-proxy:v1.20.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1019 23:54:08.753763  127260 cri.go:204] Removing image: k8s.gcr.io/kube-proxy:v1.20.0
	I1019 23:54:08.753812  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:08.912647  127260 cache_images.go:111] "k8s.gcr.io/kube-controller-manager:v1.20.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1019 23:54:08.912696  127260 cri.go:204] Removing image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I1019 23:54:08.912743  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:08.912856  127260 cache_images.go:111] "k8s.gcr.io/kube-scheduler:v1.20.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1019 23:54:08.912888  127260 cri.go:204] Removing image: k8s.gcr.io/kube-scheduler:v1.20.0
	I1019 23:54:08.912922  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:08.913008  127260 cache_images.go:111] "k8s.gcr.io/kube-apiserver:v1.20.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1019 23:54:08.913031  127260 cri.go:204] Removing image: k8s.gcr.io/kube-apiserver:v1.20.0
	I1019 23:54:08.913058  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:09.052042  127260 cache_images.go:111] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1019 23:54:09.052085  127260 cri.go:204] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 23:54:09.052126  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:09.053420  127260 cache_images.go:111] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1019 23:54:09.053455  127260 cri.go:204] Removing image: k8s.gcr.io/coredns:1.7.0
	I1019 23:54:09.053487  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:09.128134  127260 cache_images.go:111] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9" in container runtime
	I1019 23:54:09.128190  127260 cri.go:204] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1019 23:54:09.128134  127260 cache_images.go:111] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: "docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570" in container runtime
	I1019 23:54:09.128224  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.2
	I1019 23:54:09.128240  127260 cri.go:204] Removing image: docker.io/kubernetesui/dashboard:v2.3.1
	I1019 23:54:09.128278  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0
	I1019 23:54:09.128230  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:09.128285  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:09.128352  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0
	I1019 23:54:09.128379  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.20.0
	I1019 23:54:09.128449  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0
	I1019 23:54:09.128473  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 23:54:09.128527  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns:1.7.0
	I1019 23:54:08.262291  117106 node_ready.go:58] node "offline-containerd-20211019235243-6808" has status "Ready":"False"
	I1019 23:54:10.760934  117106 node_ready.go:58] node "offline-containerd-20211019235243-6808" has status "Ready":"False"
	I1019 23:54:09.289353  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.3.1
	I1019 23:54:09.289370  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I1019 23:54:09.289436  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.20.0
	I1019 23:54:09.289445  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1019 23:54:09.289500  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I1019 23:54:09.289525  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I1019 23:54:09.289577  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I1019 23:54:09.289587  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.2
	I1019 23:54:09.289598  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I1019 23:54:09.289633  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I1019 23:54:09.289671  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0
	I1019 23:54:09.289684  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.20.0
	I1019 23:54:09.289718  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I1019 23:54:09.289753  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I1019 23:54:09.289775  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1019 23:54:09.289809  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_1.7.0
	I1019 23:54:09.308984  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory
	I1019 23:54:09.309013  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.0 (30411776 bytes)
	I1019 23:54:09.309055  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I1019 23:54:09.309077  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-proxy_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory
	I1019 23:54:09.309096  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 --> /var/lib/minikube/images/kube-proxy_v1.20.0 (49545216 bytes)
	I1019 23:54:09.309124  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1
	I1019 23:54:09.309133  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I1019 23:54:09.309180  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7
	I1019 23:54:09.309204  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
	I1019 23:54:09.309226  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (325632 bytes)
	I1019 23:54:09.309234  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory
	I1019 23:54:09.309249  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.0 (14016512 bytes)
	I1019 23:54:09.309255  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory
	I1019 23:54:09.309273  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (16093184 bytes)
	I1019 23:54:09.309180  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory
	I1019 23:54:09.309300  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.0 (29364736 bytes)
	I1019 23:54:09.309307  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1019 23:54:09.309323  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1019 23:54:09.320525  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory
	I1019 23:54:09.320559  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (66936320 bytes)
	I1019 23:54:09.320617  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory
	I1019 23:54:09.320628  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (15031296 bytes)
	I1019 23:54:09.364476  127260 containerd.go:305] Loading image: /var/lib/minikube/images/pause_3.2
	I1019 23:54:09.364539  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
	I1019 23:54:09.618104  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
	I1019 23:54:09.618152  127260 containerd.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1019 23:54:09.618208  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1019 23:54:10.218369  127260 image.go:176] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc000138278 lock:{state:0 sema:0} manifest:<nil>}
	I1019 23:54:10.218450  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0"
	I1019 23:54:10.366574  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1019 23:54:10.366623  127260 containerd.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0
	I1019 23:54:10.366677  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0
	I1019 23:54:10.412224  127260 cache_images.go:111] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1019 23:54:10.412270  127260 cri.go:204] Removing image: k8s.gcr.io/etcd:3.4.13-0
	I1019 23:54:10.412316  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:10.657273  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 from cache
	I1019 23:54:10.657309  127260 containerd.go:305] Loading image: /var/lib/minikube/images/coredns_1.7.0
	I1019 23:54:10.657347  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.7.0
	I1019 23:54:10.657355  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0
	I1019 23:54:10.688355  127260 cache_images.go:281] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I1019 23:54:10.688434  127260 ssh_runner.go:152] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.4.13-0
	I1019 23:54:11.356734  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 from cache
	I1019 23:54:11.356774  127260 containerd.go:305] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7
	I1019 23:54:11.356818  127260 ssh_runner.go:309] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory
	I1019 23:54:11.356828  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.7
	I1019 23:54:11.356867  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (98416128 bytes)
	I1019 23:54:12.012616  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache
	I1019 23:54:12.012662  127260 containerd.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I1019 23:54:12.012715  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I1019 23:54:12.932799  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 from cache
	I1019 23:54:12.932844  127260 containerd.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.0
	I1019 23:54:12.932901  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0
	I1019 23:54:13.377069  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 from cache
	I1019 23:54:13.377114  127260 containerd.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.0
	I1019 23:54:13.377157  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0
	I1019 23:54:15.204925  131599 ssh_runner.go:152] Run: sudo crictl version
	I1019 23:54:15.233409  131599 start.go:423] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I1019 23:54:15.233482  131599 ssh_runner.go:152] Run: containerd --version
	I1019 23:54:15.260337  131599 ssh_runner.go:152] Run: containerd --version
	I1019 23:54:15.291144  131599 out.go:176] * Preparing Kubernetes v1.22.2 on containerd 1.4.9 ...
	I1019 23:54:15.291219  131599 cli_runner.go:115] Run: docker network inspect pause-20211019235243-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 23:54:15.340129  131599 ssh_runner.go:152] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 23:54:15.344123  131599 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:54:15.344188  131599 ssh_runner.go:152] Run: sudo crictl images --output json
	I1019 23:54:15.371134  131599 containerd.go:635] all images are preloaded for containerd runtime.
	I1019 23:54:15.371159  131599 containerd.go:539] Images already preloaded, skipping extraction
	I1019 23:54:15.371205  131599 ssh_runner.go:152] Run: sudo crictl images --output json
	I1019 23:54:15.399486  131599 containerd.go:635] all images are preloaded for containerd runtime.
	I1019 23:54:15.399512  131599 cache_images.go:79] Images are preloaded, skipping loading
	I1019 23:54:15.399563  131599 ssh_runner.go:152] Run: sudo crictl info
	I1019 23:54:15.426349  131599 cni.go:93] Creating CNI manager for ""
	I1019 23:54:15.426373  131599 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:54:15.426385  131599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1019 23:54:15.426401  131599 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20211019235243-6808 NodeName:pause-20211019235243-6808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1019 23:54:15.426573  131599 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20211019235243-6808"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
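The config rendered above is a single YAML stream holding four documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check such a stream before handing it to kubeadm is to list the `kind:` of each document; the file below is a trimmed stand-in built from the apiVersion/kind pairs in the log:

```shell
# List each document kind in a kubeadm-style multi-document YAML stream.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' "$cfg"
# → kind: InitConfiguration
#   kind: ClusterConfiguration
#   kind: KubeletConfiguration
#   kind: KubeProxyConfiguration
```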
	I1019 23:54:15.426690  131599 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20211019235243-6808 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.2 ClusterName:pause-20211019235243-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1019 23:54:15.426739  131599 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.2
	I1019 23:54:15.434806  131599 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 23:54:15.434868  131599 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 23:54:15.441937  131599 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (570 bytes)
	I1019 23:54:15.455406  131599 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 23:54:15.470459  131599 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1019 23:54:15.482969  131599 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 23:54:15.485807  131599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808 for IP: 192.168.49.2
	I1019 23:54:15.485912  131599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key
	I1019 23:54:15.485967  131599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key
	I1019 23:54:15.486054  131599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.key
	I1019 23:54:15.486128  131599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/apiserver.key.dd3b5fb2
	I1019 23:54:15.486182  131599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/proxy-client.key
	I1019 23:54:15.486310  131599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem (1338 bytes)
	W1019 23:54:15.486352  131599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808_empty.pem, impossibly tiny 0 bytes
	I1019 23:54:15.486371  131599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 23:54:15.486403  131599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem (1082 bytes)
	I1019 23:54:15.486437  131599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem (1123 bytes)
	I1019 23:54:15.486473  131599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem (1679 bytes)
	I1019 23:54:15.486546  131599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem (1708 bytes)
	I1019 23:54:15.487483  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1019 23:54:15.508914  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 23:54:15.526258  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 23:54:15.544317  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 23:54:15.563570  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 23:54:15.582352  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 23:54:15.605313  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 23:54:15.624023  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 23:54:15.642515  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem --> /usr/share/ca-certificates/6808.pem (1338 bytes)
	I1019 23:54:15.660548  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /usr/share/ca-certificates/68082.pem (1708 bytes)
	I1019 23:54:15.678587  131599 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 23:54:15.700410  131599 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 23:54:15.714306  131599 ssh_runner.go:152] Run: openssl version
	I1019 23:54:15.719174  131599 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6808.pem && ln -fs /usr/share/ca-certificates/6808.pem /etc/ssl/certs/6808.pem"
	I1019 23:54:15.727286  131599 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/6808.pem
	I1019 23:54:15.730854  131599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 19 23:29 /usr/share/ca-certificates/6808.pem
	I1019 23:54:15.730905  131599 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6808.pem
	I1019 23:54:15.735965  131599 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6808.pem /etc/ssl/certs/51391683.0"
	I1019 23:54:15.742665  131599 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68082.pem && ln -fs /usr/share/ca-certificates/68082.pem /etc/ssl/certs/68082.pem"
	I1019 23:54:15.751364  131599 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/68082.pem
	I1019 23:54:15.754678  131599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 19 23:29 /usr/share/ca-certificates/68082.pem
	I1019 23:54:15.754723  131599 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68082.pem
	I1019 23:54:15.760120  131599 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68082.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 23:54:15.766815  131599 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 23:54:15.774162  131599 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:54:15.777094  131599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 19 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:54:15.777141  131599 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:54:15.782653  131599 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 23:54:15.789538  131599 kubeadm.go:390] StartCluster: {Name:pause-20211019235243-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:pause-20211019235243-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:54:15.789632  131599 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1019 23:54:15.789678  131599 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 23:54:15.819151  131599 cri.go:76] found id: "67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38"
	I1019 23:54:15.819176  131599 cri.go:76] found id: "8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578"
	I1019 23:54:15.819184  131599 cri.go:76] found id: "5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103"
	I1019 23:54:15.819192  131599 cri.go:76] found id: "c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef"
	I1019 23:54:15.819198  131599 cri.go:76] found id: "f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb"
	I1019 23:54:15.819204  131599 cri.go:76] found id: "7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465"
	I1019 23:54:15.819210  131599 cri.go:76] found id: "5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995"
	I1019 23:54:15.819224  131599 cri.go:76] found id: ""
	I1019 23:54:15.819267  131599 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1019 23:54:15.860413  131599 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823","pid":1738,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823/rootfs","created":"2021-10-19T23:53:51.968041229Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-4h4jt_683931b6-6d3e-46c6-9939-7e6148247459"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466","pid":1021,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466/rootfs","created":"2021-10-19T23:53:18.879982422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20211019235243-6808_cde52a1a4ece8bd483c0e7444e3529e8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b","pid":2010,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b/rootfs","created":"2021-10-19T23:53:59.57200209Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-whpds_02502037-a01b-4cfb-a216-57c9601390a2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e","pid":1013,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e/rootfs","created":"2021-10-19T23:53:18.880048379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20211019235243-6808_8569f8510d5237aa9eb0268e6d83b021"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995","pid":1143,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995/rootfs","created":"2021-10-19T23:53:19.18003269Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103","pid":1778,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103/rootfs","created":"2021-10-19T23:53:52.124094259Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38","pid":2043,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38/rootfs","created":"2021-10-19T23:53:59.824044588Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465","pid":1117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465/rootfs","created":"2021-10-19T23:53:19.120007913Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578","pid":1864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578/rootfs","created":"2021-10-19T23:53:52.508005914Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620","pid":1000,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620/rootfs","created":"2021-10-19T23:53:18.879980395Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20211019235243-6808_c04a89ac537d5c77f40d367f319785a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5","pid":998,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5/rootfs","created":"2021-10-19T23:53:18.879998404Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20211019235243-6808_ee9b4e0bead94b40fae13e6ee192e478"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef","pid":1155,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef/rootfs","created":"2021-10-19T23:53:19.180054826Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb","pid":1745,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb/rootfs","created":"2021-10-19T23:53:52.156049702Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-hx6rh_3b1fccbe-8c6c-4901-9b87-08a0c706c2ec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb","pid":1156,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb/rootfs","created":"2021-10-19T23:53:19.180098798Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466"},"owner":"root"}]
	I1019 23:54:15.860638  131599 cri.go:113] list returned 14 containers
	I1019 23:54:15.860652  131599 cri.go:116] container: {ID:2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823 Status:running}
	I1019 23:54:15.860664  131599 cri.go:118] skipping 2255d1c3ffe74e31933203aa85911eb6a1e390f13b31255cdc8b5db10b84b823 - not in ps
	I1019 23:54:15.860670  131599 cri.go:116] container: {ID:25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466 Status:running}
	I1019 23:54:15.860677  131599 cri.go:118] skipping 25169287aa5e57ecc19e6ec932c70c80d1a289f0b60ef6d629561d4c2229c466 - not in ps
	I1019 23:54:15.860686  131599 cri.go:116] container: {ID:2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b Status:running}
	I1019 23:54:15.860693  131599 cri.go:118] skipping 2f0e26a91659a57d65733003789665a4e8dd2b0c4ecd756466878a407ef14c8b - not in ps
	I1019 23:54:15.860702  131599 cri.go:116] container: {ID:51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e Status:running}
	I1019 23:54:15.860708  131599 cri.go:118] skipping 51bfb259838f91d436d05cde0f107d782d01b2a4b853169a55b418f6ece5b01e - not in ps
	I1019 23:54:15.860719  131599 cri.go:116] container: {ID:5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995 Status:running}
	I1019 23:54:15.860729  131599 cri.go:122] skipping {5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995 running}: state = "running", want "paused"
	I1019 23:54:15.860739  131599 cri.go:116] container: {ID:5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103 Status:running}
	I1019 23:54:15.860746  131599 cri.go:122] skipping {5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103 running}: state = "running", want "paused"
	I1019 23:54:15.860756  131599 cri.go:116] container: {ID:67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38 Status:running}
	I1019 23:54:15.860764  131599 cri.go:122] skipping {67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38 running}: state = "running", want "paused"
	I1019 23:54:15.860774  131599 cri.go:116] container: {ID:7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465 Status:running}
	I1019 23:54:15.860780  131599 cri.go:122] skipping {7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465 running}: state = "running", want "paused"
	I1019 23:54:15.860789  131599 cri.go:116] container: {ID:8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578 Status:running}
	I1019 23:54:15.860795  131599 cri.go:122] skipping {8e118a6817318c419866d57e4348e3019c5fabe15768ab92684ad1eb15588578 running}: state = "running", want "paused"
	I1019 23:54:15.860804  131599 cri.go:116] container: {ID:995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620 Status:running}
	I1019 23:54:15.860810  131599 cri.go:118] skipping 995a9108c285470fcc638bc91ce0967c1f41cc346768ec67a0f9a3537a80e620 - not in ps
	I1019 23:54:15.860815  131599 cri.go:116] container: {ID:b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5 Status:running}
	I1019 23:54:15.860821  131599 cri.go:118] skipping b6cfc672745faa9120e23b412779ca56fa92433ee42268b3deb5fc5de35be6d5 - not in ps
	I1019 23:54:15.860825  131599 cri.go:116] container: {ID:c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef Status:running}
	I1019 23:54:15.860831  131599 cri.go:122] skipping {c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef running}: state = "running", want "paused"
	I1019 23:54:15.860837  131599 cri.go:116] container: {ID:f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb Status:running}
	I1019 23:54:15.860842  131599 cri.go:118] skipping f431cd5d317c1bd3cf53ba62b64e8707a6a18c18f3afc1aa01b0c49649113beb - not in ps
	I1019 23:54:15.860850  131599 cri.go:116] container: {ID:f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb Status:running}
	I1019 23:54:15.860856  131599 cri.go:122] skipping {f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb running}: state = "running", want "paused"
	I1019 23:54:15.860912  131599 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 23:54:15.868814  131599 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1019 23:54:15.868836  131599 kubeadm.go:600] restartCluster start
	I1019 23:54:15.868896  131599 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1019 23:54:15.876870  131599 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 23:54:15.877916  131599 kubeconfig.go:92] found "pause-20211019235243-6808" server: "https://192.168.49.2:8443"
	I1019 23:54:15.878698  131599 kapi.go:59] client config for pause-20211019235243-6808: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:15.880845  131599 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 23:54:15.898610  131599 api_server.go:165] Checking apiserver status ...
	I1019 23:54:15.898658  131599 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:15.918340  131599 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup
	I1019 23:54:15.928429  131599 api_server.go:181] apiserver freezer: "4:freezer:/docker/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/kubepods/burstable/podcde52a1a4ece8bd483c0e7444e3529e8/f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb"
	I1019 23:54:15.928485  131599 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/kubepods/burstable/podcde52a1a4ece8bd483c0e7444e3529e8/f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb/freezer.state
	I1019 23:54:15.935224  131599 api_server.go:203] freezer state: "THAWED"
	I1019 23:54:15.935256  131599 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 23:54:15.940276  131599 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 23:54:15.955635  131599 system_pods.go:86] 7 kube-system pods found
	I1019 23:54:15.955665  131599 system_pods.go:89] "coredns-78fcd69978-whpds" [02502037-a01b-4cfb-a216-57c9601390a2] Running
	I1019 23:54:15.955677  131599 system_pods.go:89] "etcd-pause-20211019235243-6808" [7637c006-af0b-45a0-aae1-b41320e8d969] Running
	I1019 23:54:15.955684  131599 system_pods.go:89] "kindnet-hx6rh" [3b1fccbe-8c6c-4901-9b87-08a0c706c2ec] Running
	I1019 23:54:15.955690  131599 system_pods.go:89] "kube-apiserver-pause-20211019235243-6808" [09994dd7-2259-4625-8119-0a7f4c637084] Running
	I1019 23:54:15.955697  131599 system_pods.go:89] "kube-controller-manager-pause-20211019235243-6808" [f6e6cc7c-e2ec-4adf-b950-e809653527da] Running
	I1019 23:54:15.955703  131599 system_pods.go:89] "kube-proxy-4h4jt" [683931b6-6d3e-46c6-9939-7e6148247459] Running
	I1019 23:54:15.955710  131599 system_pods.go:89] "kube-scheduler-pause-20211019235243-6808" [c037427f-83e0-4c17-b129-bbaaa7d63ff6] Running
	I1019 23:54:15.956465  131599 api_server.go:140] control plane version: v1.22.2
	I1019 23:54:15.956485  131599 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I1019 23:54:15.956491  131599 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I1019 23:54:15.956495  131599 kubeadm.go:604] restartCluster took 87.654104ms
	I1019 23:54:15.956500  131599 kubeadm.go:392] StartCluster complete in 166.968227ms
	I1019 23:54:15.956511  131599 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:54:15.956575  131599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:54:15.957658  131599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:54:15.958624  131599 kapi.go:59] client config for pause-20211019235243-6808: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:15.962274  131599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20211019235243-6808" rescaled to 1
	I1019 23:54:15.962335  131599 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1019 23:54:15.962345  131599 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 23:54:15.965049  131599 out.go:176] * Verifying Kubernetes components...
	I1019 23:54:15.962579  131599 config.go:176] Loaded profile config "pause-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:54:15.965110  131599 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:54:15.962595  131599 addons.go:404] enableAddons start: toEnable=map[], additional=[]
	I1019 23:54:15.965187  131599 addons.go:65] Setting storage-provisioner=true in profile "pause-20211019235243-6808"
	I1019 23:54:15.965201  131599 addons.go:65] Setting default-storageclass=true in profile "pause-20211019235243-6808"
	I1019 23:54:15.965207  131599 addons.go:153] Setting addon storage-provisioner=true in "pause-20211019235243-6808"
	W1019 23:54:15.965214  131599 addons.go:165] addon storage-provisioner should already be in state true
	I1019 23:54:15.965214  131599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20211019235243-6808"
	I1019 23:54:15.965238  131599 host.go:66] Checking if "pause-20211019235243-6808" exists ...
	I1019 23:54:15.965524  131599 cli_runner.go:115] Run: docker container inspect pause-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:15.965670  131599 cli_runner.go:115] Run: docker container inspect pause-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:16.028816  131599 kapi.go:59] client config for pause-20211019235243-6808: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/pause-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:16.033771  131599 addons.go:153] Setting addon default-storageclass=true in "pause-20211019235243-6808"
	W1019 23:54:16.033806  131599 addons.go:165] addon default-storageclass should already be in state true
	I1019 23:54:16.033834  131599 host.go:66] Checking if "pause-20211019235243-6808" exists ...
	I1019 23:54:16.034349  131599 cli_runner.go:115] Run: docker container inspect pause-20211019235243-6808 --format={{.State.Status}}
	I1019 23:54:16.040199  131599 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 23:54:16.040322  131599 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 23:54:16.040340  131599 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 23:54:16.040389  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:16.050487  131599 node_ready.go:35] waiting up to 6m0s for node "pause-20211019235243-6808" to be "Ready" ...
	I1019 23:54:16.050861  131599 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1019 23:54:16.055379  131599 node_ready.go:49] node "pause-20211019235243-6808" has status "Ready":"True"
	I1019 23:54:16.055405  131599 node_ready.go:38] duration metric: took 4.890693ms waiting for node "pause-20211019235243-6808" to be "Ready" ...
	I1019 23:54:16.055417  131599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1019 23:54:16.060621  131599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-whpds" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.072271  131599 pod_ready.go:92] pod "coredns-78fcd69978-whpds" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:16.072288  131599 pod_ready.go:81] duration metric: took 11.646168ms waiting for pod "coredns-78fcd69978-whpds" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.072301  131599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.076549  131599 pod_ready.go:92] pod "etcd-pause-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:16.076567  131599 pod_ready.go:81] duration metric: took 4.258511ms waiting for pod "etcd-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.076581  131599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.080759  131599 pod_ready.go:92] pod "kube-apiserver-pause-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:16.080772  131599 pod_ready.go:81] duration metric: took 4.184833ms waiting for pod "kube-apiserver-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.080783  131599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.093746  131599 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 23:54:16.093764  131599 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 23:54:16.093812  131599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211019235243-6808
	I1019 23:54:16.096030  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:16.143283  131599 pod_ready.go:92] pod "kube-controller-manager-pause-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:16.143307  131599 pod_ready.go:81] duration metric: took 62.510094ms waiting for pod "kube-controller-manager-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.143317  131599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4h4jt" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.146360  131599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/pause-20211019235243-6808/id_rsa Username:docker}
	I1019 23:54:16.189054  131599 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 23:54:16.251105  131599 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 23:54:13.261229  117106 node_ready.go:58] node "offline-containerd-20211019235243-6808" has status "Ready":"False"
	I1019 23:54:14.261157  117106 node_ready.go:49] node "offline-containerd-20211019235243-6808" has status "Ready":"True"
	I1019 23:54:14.261180  117106 node_ready.go:38] duration metric: took 12.00964179s waiting for node "offline-containerd-20211019235243-6808" to be "Ready" ...
	I1019 23:54:14.261188  117106 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1019 23:54:14.271301  117106 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-bxnms" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.285880  117106 pod_ready.go:92] pod "coredns-78fcd69978-bxnms" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:15.285908  117106 pod_ready.go:81] duration metric: took 1.014571262s waiting for pod "coredns-78fcd69978-bxnms" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.285922  117106 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.290317  117106 pod_ready.go:92] pod "etcd-offline-containerd-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:15.290334  117106 pod_ready.go:81] duration metric: took 4.404012ms waiting for pod "etcd-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.290345  117106 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.294550  117106 pod_ready.go:92] pod "kube-apiserver-offline-containerd-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:15.294564  117106 pod_ready.go:81] duration metric: took 4.214496ms waiting for pod "kube-apiserver-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.294572  117106 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.298910  117106 pod_ready.go:92] pod "kube-controller-manager-offline-containerd-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:15.298928  117106 pod_ready.go:81] duration metric: took 4.349433ms waiting for pod "kube-controller-manager-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.298940  117106 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zsv69" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.461331  117106 pod_ready.go:92] pod "kube-proxy-zsv69" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:15.461351  117106 pod_ready.go:81] duration metric: took 162.403387ms waiting for pod "kube-proxy-zsv69" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.461363  117106 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.862092  117106 pod_ready.go:92] pod "kube-scheduler-offline-containerd-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:15.862110  117106 pod_ready.go:81] duration metric: took 400.739561ms waiting for pod "kube-scheduler-offline-containerd-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:15.862124  117106 pod_ready.go:38] duration metric: took 1.6009255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1019 23:54:15.862145  117106 api_server.go:51] waiting for apiserver process to appear ...
	I1019 23:54:15.862188  117106 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:15.888635  117106 api_server.go:71] duration metric: took 13.767042079s to wait for apiserver process to appear ...
	I1019 23:54:15.888664  117106 api_server.go:87] waiting for apiserver healthz status ...
	I1019 23:54:15.888675  117106 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1019 23:54:15.899151  117106 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1019 23:54:15.900236  117106 api_server.go:140] control plane version: v1.22.2
	I1019 23:54:15.900257  117106 api_server.go:130] duration metric: took 11.587018ms to wait for apiserver health ...
	I1019 23:54:15.900267  117106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 23:54:16.064760  117106 system_pods.go:59] 8 kube-system pods found
	I1019 23:54:16.064800  117106 system_pods.go:61] "coredns-78fcd69978-bxnms" [9b47f213-45cb-4a75-a037-0b6f1818bb33] Running
	I1019 23:54:16.064809  117106 system_pods.go:61] "etcd-offline-containerd-20211019235243-6808" [a61d2d49-7f6d-43f5-91b0-44c47939a3ef] Running
	I1019 23:54:16.064815  117106 system_pods.go:61] "kindnet-mmcdk" [ee046c13-7efc-4f30-95c0-985b75892f8b] Running
	I1019 23:54:16.064822  117106 system_pods.go:61] "kube-apiserver-offline-containerd-20211019235243-6808" [89251d86-59ee-463b-937a-8062765cdab7] Running
	I1019 23:54:16.064828  117106 system_pods.go:61] "kube-controller-manager-offline-containerd-20211019235243-6808" [4adfb615-9b91-491e-ac2b-e4b79e9cab6e] Running
	I1019 23:54:16.064841  117106 system_pods.go:61] "kube-proxy-zsv69" [3ed76fa6-19e5-471f-9859-eb232abd50f8] Running
	I1019 23:54:16.064847  117106 system_pods.go:61] "kube-scheduler-offline-containerd-20211019235243-6808" [edcb7e4a-bec3-4558-bf2e-40ddab504f9f] Running
	I1019 23:54:16.064852  117106 system_pods.go:61] "storage-provisioner" [f25fd246-33db-4fab-9141-2bbcaea12712] Running
	I1019 23:54:16.064859  117106 system_pods.go:74] duration metric: took 164.58645ms to wait for pod list to return data ...
	I1019 23:54:16.064867  117106 default_sa.go:34] waiting for default service account to be created ...
	I1019 23:54:16.261647  117106 default_sa.go:45] found service account: "default"
	I1019 23:54:16.261672  117106 default_sa.go:55] duration metric: took 196.799291ms for default service account to be created ...
	I1019 23:54:16.261683  117106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 23:54:16.465351  117106 system_pods.go:86] 8 kube-system pods found
	I1019 23:54:16.465379  117106 system_pods.go:89] "coredns-78fcd69978-bxnms" [9b47f213-45cb-4a75-a037-0b6f1818bb33] Running
	I1019 23:54:16.465386  117106 system_pods.go:89] "etcd-offline-containerd-20211019235243-6808" [a61d2d49-7f6d-43f5-91b0-44c47939a3ef] Running
	I1019 23:54:16.465392  117106 system_pods.go:89] "kindnet-mmcdk" [ee046c13-7efc-4f30-95c0-985b75892f8b] Running
	I1019 23:54:16.465400  117106 system_pods.go:89] "kube-apiserver-offline-containerd-20211019235243-6808" [89251d86-59ee-463b-937a-8062765cdab7] Running
	I1019 23:54:16.465408  117106 system_pods.go:89] "kube-controller-manager-offline-containerd-20211019235243-6808" [4adfb615-9b91-491e-ac2b-e4b79e9cab6e] Running
	I1019 23:54:16.465418  117106 system_pods.go:89] "kube-proxy-zsv69" [3ed76fa6-19e5-471f-9859-eb232abd50f8] Running
	I1019 23:54:16.465425  117106 system_pods.go:89] "kube-scheduler-offline-containerd-20211019235243-6808" [edcb7e4a-bec3-4558-bf2e-40ddab504f9f] Running
	I1019 23:54:16.465436  117106 system_pods.go:89] "storage-provisioner" [f25fd246-33db-4fab-9141-2bbcaea12712] Running
	I1019 23:54:16.465443  117106 system_pods.go:126] duration metric: took 203.753987ms to wait for k8s-apps to be running ...
	I1019 23:54:16.465453  117106 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 23:54:16.465499  117106 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:54:16.477078  117106 system_svc.go:56] duration metric: took 11.617743ms WaitForService to wait for kubelet.
	I1019 23:54:16.477105  117106 kubeadm.go:547] duration metric: took 14.355515345s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1019 23:54:16.477134  117106 node_conditions.go:102] verifying NodePressure condition ...
	I1019 23:54:16.662335  117106 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1019 23:54:16.662363  117106 node_conditions.go:123] node cpu capacity is 8
	I1019 23:54:16.662376  117106 node_conditions.go:105] duration metric: took 185.237055ms to run NodePressure ...
	I1019 23:54:16.662389  117106 start.go:231] waiting for startup goroutines ...
	I1019 23:54:16.711707  117106 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1019 23:54:16.714005  117106 out.go:176] 
	W1019 23:54:16.714217  117106 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.2.
	I1019 23:54:16.715644  117106 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1019 23:54:16.717070  117106 out.go:176] * Done! kubectl is now configured to use "offline-containerd-20211019235243-6808" cluster and "default" namespace by default
	I1019 23:54:16.464498  131599 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1019 23:54:16.464581  131599 addons.go:406] enableAddons completed in 501.98877ms
	I1019 23:54:16.543738  131599 pod_ready.go:92] pod "kube-proxy-4h4jt" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:16.543757  131599 pod_ready.go:81] duration metric: took 400.43464ms waiting for pod "kube-proxy-4h4jt" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.543766  131599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.944010  131599 pod_ready.go:92] pod "kube-scheduler-pause-20211019235243-6808" in "kube-system" namespace has status "Ready":"True"
	I1019 23:54:16.944028  131599 pod_ready.go:81] duration metric: took 400.255239ms waiting for pod "kube-scheduler-pause-20211019235243-6808" in "kube-system" namespace to be "Ready" ...
	I1019 23:54:16.944037  131599 pod_ready.go:38] duration metric: took 888.603337ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1019 23:54:16.944057  131599 api_server.go:51] waiting for apiserver process to appear ...
	I1019 23:54:16.944096  131599 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:16.965391  131599 api_server.go:71] duration metric: took 1.003024906s to wait for apiserver process to appear ...
	I1019 23:54:16.965415  131599 api_server.go:87] waiting for apiserver healthz status ...
	I1019 23:54:16.965426  131599 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 23:54:16.971709  131599 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 23:54:16.972645  131599 api_server.go:140] control plane version: v1.22.2
	I1019 23:54:16.972668  131599 api_server.go:130] duration metric: took 7.24751ms to wait for apiserver health ...
	I1019 23:54:16.972678  131599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 23:54:17.146716  131599 system_pods.go:59] 8 kube-system pods found
	I1019 23:54:17.146743  131599 system_pods.go:61] "coredns-78fcd69978-whpds" [02502037-a01b-4cfb-a216-57c9601390a2] Running
	I1019 23:54:17.146748  131599 system_pods.go:61] "etcd-pause-20211019235243-6808" [7637c006-af0b-45a0-aae1-b41320e8d969] Running
	I1019 23:54:17.146752  131599 system_pods.go:61] "kindnet-hx6rh" [3b1fccbe-8c6c-4901-9b87-08a0c706c2ec] Running
	I1019 23:54:17.146756  131599 system_pods.go:61] "kube-apiserver-pause-20211019235243-6808" [09994dd7-2259-4625-8119-0a7f4c637084] Running
	I1019 23:54:17.146760  131599 system_pods.go:61] "kube-controller-manager-pause-20211019235243-6808" [f6e6cc7c-e2ec-4adf-b950-e809653527da] Running
	I1019 23:54:17.146764  131599 system_pods.go:61] "kube-proxy-4h4jt" [683931b6-6d3e-46c6-9939-7e6148247459] Running
	I1019 23:54:17.146767  131599 system_pods.go:61] "kube-scheduler-pause-20211019235243-6808" [c037427f-83e0-4c17-b129-bbaaa7d63ff6] Running
	I1019 23:54:17.146773  131599 system_pods.go:61] "storage-provisioner" [7556b1b7-1180-43ef-8ff0-ee10aaa82bfe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 23:54:17.146779  131599 system_pods.go:74] duration metric: took 174.095054ms to wait for pod list to return data ...
	I1019 23:54:17.146791  131599 default_sa.go:34] waiting for default service account to be created ...
	I1019 23:54:17.345572  131599 default_sa.go:45] found service account: "default"
	I1019 23:54:17.345597  131599 default_sa.go:55] duration metric: took 198.800433ms for default service account to be created ...
	I1019 23:54:17.345609  131599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 23:54:17.546345  131599 system_pods.go:86] 8 kube-system pods found
	I1019 23:54:17.546372  131599 system_pods.go:89] "coredns-78fcd69978-whpds" [02502037-a01b-4cfb-a216-57c9601390a2] Running
	I1019 23:54:17.546377  131599 system_pods.go:89] "etcd-pause-20211019235243-6808" [7637c006-af0b-45a0-aae1-b41320e8d969] Running
	I1019 23:54:17.546381  131599 system_pods.go:89] "kindnet-hx6rh" [3b1fccbe-8c6c-4901-9b87-08a0c706c2ec] Running
	I1019 23:54:17.546385  131599 system_pods.go:89] "kube-apiserver-pause-20211019235243-6808" [09994dd7-2259-4625-8119-0a7f4c637084] Running
	I1019 23:54:17.546389  131599 system_pods.go:89] "kube-controller-manager-pause-20211019235243-6808" [f6e6cc7c-e2ec-4adf-b950-e809653527da] Running
	I1019 23:54:17.546393  131599 system_pods.go:89] "kube-proxy-4h4jt" [683931b6-6d3e-46c6-9939-7e6148247459] Running
	I1019 23:54:17.546399  131599 system_pods.go:89] "kube-scheduler-pause-20211019235243-6808" [c037427f-83e0-4c17-b129-bbaaa7d63ff6] Running
	I1019 23:54:17.546408  131599 system_pods.go:89] "storage-provisioner" [7556b1b7-1180-43ef-8ff0-ee10aaa82bfe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 23:54:17.546423  131599 system_pods.go:126] duration metric: took 200.808466ms to wait for k8s-apps to be running ...
	I1019 23:54:17.546437  131599 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 23:54:17.546479  131599 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:54:17.557674  131599 system_svc.go:56] duration metric: took 11.229614ms WaitForService to wait for kubelet.
	I1019 23:54:17.557699  131599 kubeadm.go:547] duration metric: took 1.595339233s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1019 23:54:17.557722  131599 node_conditions.go:102] verifying NodePressure condition ...
	I1019 23:54:17.744839  131599 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1019 23:54:17.744866  131599 node_conditions.go:123] node cpu capacity is 8
	I1019 23:54:17.744884  131599 node_conditions.go:105] duration metric: took 187.1566ms to run NodePressure ...
	I1019 23:54:17.744897  131599 start.go:231] waiting for startup goroutines ...
	I1019 23:54:17.797138  131599 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1019 23:54:17.800754  131599 out.go:176] 
	W1019 23:54:17.800947  131599 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.2.
	I1019 23:54:17.802621  131599 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1019 23:54:17.804281  131599 out.go:176] * Done! kubectl is now configured to use "pause-20211019235243-6808" cluster and "default" namespace by default
	I1019 23:54:15.334817  127260 ssh_runner.go:192] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0: (1.957632253s)
	I1019 23:54:15.334845  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 from cache
	I1019 23:54:15.334867  127260 containerd.go:305] Loading image: /var/lib/minikube/images/dashboard_v2.3.1
	I1019 23:54:15.334921  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1
	I1019 23:54:18.318958  127260 ssh_runner.go:192] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1: (2.984011683s)
	I1019 23:54:18.319037  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 from cache
	I1019 23:54:18.319062  127260 containerd.go:305] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
	I1019 23:54:18.319113  127260 ssh_runner.go:152] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0
	I1019 23:54:19.783743  127260 ssh_runner.go:192] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0: (1.464601338s)
	I1019 23:54:19.783771  127260 cache_images.go:310] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache
	I1019 23:54:19.783796  127260 cache_images.go:118] Successfully loaded all cached images
	I1019 23:54:19.783807  127260 cache_images.go:87] LoadImages completed in 11.846621853s
	I1019 23:54:19.783888  127260 ssh_runner.go:152] Run: sudo crictl info
	I1019 23:54:19.801804  127260 cni.go:93] Creating CNI manager for ""
	I1019 23:54:19.801827  127260 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:54:19.801840  127260 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1019 23:54:19.801858  127260 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.59.153 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-20211019235243-6808 NodeName:stopped-upgrade-20211019235243-6808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.59.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.59.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1019 23:54:19.802019  127260 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.59.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "stopped-upgrade-20211019235243-6808"
	  kubeletExtraArgs:
	    node-ip: 192.168.59.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.59.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 23:54:19.802132  127260 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=stopped-upgrade-20211019235243-6808 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.59.153 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20211019235243-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1019 23:54:19.802185  127260 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1019 23:54:19.811640  127260 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 23:54:19.811725  127260 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 23:54:19.820553  127260 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I1019 23:54:19.836243  127260 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 23:54:19.852897  127260 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
	I1019 23:54:19.866654  127260 ssh_runner.go:152] Run: grep 192.168.59.153	control-plane.minikube.internal$ /etc/hosts
	I1019 23:54:19.869714  127260 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.59.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
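	The one-liner above swaps the control-plane entry in /etc/hosts: `grep -v` drops any stale `control-plane.minikube.internal` line, the new IP is appended, and the result replaces the file via a temp copy. A minimal sketch of the same swap, operating on a temp file rather than the real /etc/hosts:

```shell
# Sketch of minikube's /etc/hosts rewrite (illustration only; uses a temp
# file and a portable printf-tab instead of bash's $'\t').
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
# drop the stale entry, append the current control-plane IP
{ grep -v "${tab}control-plane.minikube.internal\$" "$hosts"
  printf '192.168.59.153\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"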
	I1019 23:54:19.879435  127260 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808 for IP: 192.168.59.153
	I1019 23:54:19.879538  127260 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key
	I1019 23:54:19.879572  127260 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key
	I1019 23:54:19.879667  127260 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/client.key
	I1019 23:54:19.879715  127260 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/apiserver.key.3204a66d
	I1019 23:54:19.879747  127260 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/proxy-client.key
	I1019 23:54:19.879875  127260 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem (1338 bytes)
	W1019 23:54:19.879910  127260 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808_empty.pem, impossibly tiny 0 bytes
	I1019 23:54:19.879922  127260 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 23:54:19.879957  127260 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem (1082 bytes)
	I1019 23:54:19.879995  127260 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem (1123 bytes)
	I1019 23:54:19.880022  127260 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem (1679 bytes)
	I1019 23:54:19.880067  127260 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem (1708 bytes)
	I1019 23:54:19.881161  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1019 23:54:19.899562  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 23:54:19.918281  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 23:54:19.936932  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 23:54:19.963141  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 23:54:19.980541  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 23:54:19.998477  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 23:54:20.015697  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 23:54:20.032921  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /usr/share/ca-certificates/68082.pem (1708 bytes)
	I1019 23:54:20.052845  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 23:54:20.070596  127260 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem --> /usr/share/ca-certificates/6808.pem (1338 bytes)
	I1019 23:54:20.087967  127260 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 23:54:20.100496  127260 ssh_runner.go:152] Run: openssl version
	I1019 23:54:20.105427  127260 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6808.pem && ln -fs /usr/share/ca-certificates/6808.pem /etc/ssl/certs/6808.pem"
	I1019 23:54:20.112856  127260 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/6808.pem
	I1019 23:54:20.116019  127260 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 19 23:29 /usr/share/ca-certificates/6808.pem
	I1019 23:54:20.116064  127260 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6808.pem
	I1019 23:54:20.120910  127260 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6808.pem /etc/ssl/certs/51391683.0"
	I1019 23:54:20.127609  127260 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68082.pem && ln -fs /usr/share/ca-certificates/68082.pem /etc/ssl/certs/68082.pem"
	I1019 23:54:20.134971  127260 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/68082.pem
	I1019 23:54:20.138588  127260 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 19 23:29 /usr/share/ca-certificates/68082.pem
	I1019 23:54:20.138635  127260 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68082.pem
	I1019 23:54:20.144587  127260 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68082.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 23:54:20.154077  127260 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 23:54:20.163923  127260 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:54:20.167722  127260 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 19 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:54:20.167769  127260 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 23:54:20.176372  127260 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
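	The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above set up OpenSSL's hashed-directory CA lookup: each CA is reachable via a symlink named after its subject hash with a `.0` suffix. A sketch of the convention (illustration, not minikube's code), using a throwaway self-signed cert and a temp dir instead of the real paths:

```shell
# Build a hash-named symlink and confirm OpenSSL can resolve the CA via it.
dir=$(mktemp -d)
# throwaway self-signed cert standing in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
# verification succeeds purely through the hashed directory lookup
openssl verify -CApath "$dir" "$dir/ca.pem"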
	I1019 23:54:20.185020  127260 kubeadm.go:390] StartCluster: {Name:stopped-upgrade-20211019235243-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20211019235243-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.153 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1019 23:54:20.185113  127260 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1019 23:54:20.185158  127260 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 23:54:20.205249  127260 cri.go:76] found id: "b44816aceafff6e2c1dc7f3c5fdf67040bdda6af8ce411d3c2b91e57f4c1c301"
	I1019 23:54:20.205270  127260 cri.go:76] found id: "d906dad177d3793a7422e18ab831675903f8bc71452938fd6bebec707cc760ea"
	I1019 23:54:20.205278  127260 cri.go:76] found id: "4b2f253e8403b5b5bc40143fc51dc2554c5c39e05a4ca06c5532295cf5b389f7"
	I1019 23:54:20.205286  127260 cri.go:76] found id: "b0f4cdba330bf407993f1cc0ed15223626f085ddd413253b0f460b726317d836"
	I1019 23:54:20.205293  127260 cri.go:76] found id: ""
	I1019 23:54:20.205333  127260 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1019 23:54:20.225332  127260 cri.go:103] JSON = null
	W1019 23:54:20.225381  127260 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 4
	I1019 23:54:20.225434  127260 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 23:54:20.238004  127260 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1019 23:54:20.238024  127260 kubeadm.go:600] restartCluster start
	I1019 23:54:20.238069  127260 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1019 23:54:20.245620  127260 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 23:54:20.246381  127260 kubeconfig.go:116] verify returned: extract IP: "stopped-upgrade-20211019235243-6808" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:54:20.246608  127260 kubeconfig.go:127] "stopped-upgrade-20211019235243-6808" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig - will repair!
	I1019 23:54:20.247034  127260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 23:54:20.247750  127260 kapi.go:59] client config for stopped-upgrade-20211019235243-6808: &rest.Config{Host:"https://192.168.59.153:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/stopped-upgrade-20211019235243-6808/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x158bd60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 23:54:20.249296  127260 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 23:54:20.259751  127260 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-10-19 23:53:18.528524095 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-10-19 23:54:19.861114597 +0000
	@@ -65,4 +65,10 @@
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	 clusterCIDR: "10.244.0.0/16"
	-metricsBindAddress: 192.168.59.153:10249
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
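	The "needs reconfigure" decision above rests on the exit status of `diff -u` between the deployed kubeadm.yaml and the freshly generated `.new` file: a non-zero status means the configs differ. A self-contained sketch of that check (assumed shape, not minikube's Go code), using the same metricsBindAddress change shown in the diff:

```shell
# diff exits 0 when files match, 1 when they differ; the non-zero branch is
# what triggers the cluster reconfigure.
old=$(mktemp); new=$(mktemp)
printf 'metricsBindAddress: 192.168.59.153:10249\n' > "$old"
printf 'metricsBindAddress: 0.0.0.0:10249\n' > "$new"
if diff -u "$old" "$new" > /dev/null; then
  echo "configs match"
else
  echo "needs reconfigure"
fi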
	I1019 23:54:20.259769  127260 kubeadm.go:1032] stopping kube-system containers ...
	I1019 23:54:20.259780  127260 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1019 23:54:20.259841  127260 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 23:54:20.276873  127260 cri.go:76] found id: "b44816aceafff6e2c1dc7f3c5fdf67040bdda6af8ce411d3c2b91e57f4c1c301"
	I1019 23:54:20.276898  127260 cri.go:76] found id: "d906dad177d3793a7422e18ab831675903f8bc71452938fd6bebec707cc760ea"
	I1019 23:54:20.276907  127260 cri.go:76] found id: "4b2f253e8403b5b5bc40143fc51dc2554c5c39e05a4ca06c5532295cf5b389f7"
	I1019 23:54:20.276914  127260 cri.go:76] found id: "b0f4cdba330bf407993f1cc0ed15223626f085ddd413253b0f460b726317d836"
	I1019 23:54:20.276926  127260 cri.go:76] found id: ""
	I1019 23:54:20.276936  127260 cri.go:220] Stopping containers: [b44816aceafff6e2c1dc7f3c5fdf67040bdda6af8ce411d3c2b91e57f4c1c301 d906dad177d3793a7422e18ab831675903f8bc71452938fd6bebec707cc760ea 4b2f253e8403b5b5bc40143fc51dc2554c5c39e05a4ca06c5532295cf5b389f7 b0f4cdba330bf407993f1cc0ed15223626f085ddd413253b0f460b726317d836]
	I1019 23:54:20.276984  127260 ssh_runner.go:152] Run: which crictl
	I1019 23:54:20.280004  127260 ssh_runner.go:152] Run: sudo /usr/bin/crictl stop b44816aceafff6e2c1dc7f3c5fdf67040bdda6af8ce411d3c2b91e57f4c1c301 d906dad177d3793a7422e18ab831675903f8bc71452938fd6bebec707cc760ea 4b2f253e8403b5b5bc40143fc51dc2554c5c39e05a4ca06c5532295cf5b389f7 b0f4cdba330bf407993f1cc0ed15223626f085ddd413253b0f460b726317d836
	I1019 23:54:20.295954  127260 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I1019 23:54:20.305565  127260 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 23:54:20.312738  127260 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5615 Oct 19 23:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5634 Oct 19 23:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2079 Oct 19 23:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5578 Oct 19 23:53 /etc/kubernetes/scheduler.conf
	
	I1019 23:54:20.312787  127260 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 23:54:20.320835  127260 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 23:54:20.327569  127260 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 23:54:20.334476  127260 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 23:54:20.334524  127260 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 23:54:20.340748  127260 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 23:54:20.347749  127260 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 23:54:20.347792  127260 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 23:54:20.356800  127260 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 23:54:20.364374  127260 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1019 23:54:20.364388  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 23:54:20.424048  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 23:54:21.355542  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 23:54:21.585352  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 23:54:21.690354  127260 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1019 23:54:21.808051  127260 api_server.go:51] waiting for apiserver process to appear ...
	I1019 23:54:21.808111  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:22.326455  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:22.825477  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:23.325280  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:23.827946  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:24.325791  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:26.325891  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:26.826059  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:27.325599  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:27.825675  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:28.325982  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:28.825438  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:29.326017  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:29.825701  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:30.325635  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:30.825899  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:31.325346  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:31.825823  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:32.325917  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:32.825705  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:33.325774  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:54:33.825922  127260 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
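	The repeated `pgrep -xnf kube-apiserver.*minikube.*` lines above are a bounded poll: the same predicate is retried roughly every 500ms until the apiserver process appears. A minimal sketch of that loop (assumed shape; the real predicate is the sudo pgrep command, replaced here by `true` so the sketch runs anywhere):

```shell
# Retry a predicate command every 0.5s, up to a fixed number of attempts.
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 0.5
  done
}
# real usage would be: wait_for 20 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
wait_for 20 true && echo "apiserver is up"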
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7141c9d1bc5b5       6e38f40d628db       19 seconds ago       Running             storage-provisioner       0                   f94a7deeaee11
	67717a61116e3       8d147537fb7d1       36 seconds ago       Running             coredns                   0                   2f0e26a91659a
	8e118a6817318       6de166512aa22       43 seconds ago       Running             kindnet-cni               0                   f431cd5d317c1
	5f570741dc102       873127efbc8a7       44 seconds ago       Running             kube-proxy                0                   2255d1c3ffe74
	c17f0c5af3c7c       5425bcbd23c54       About a minute ago   Running             kube-controller-manager   0                   51bfb259838f9
	f7fd5d2b98678       e64579b7d8862       About a minute ago   Running             kube-apiserver            0                   25169287aa5e5
	7ff3672f9c090       b51ddc1014b04       About a minute ago   Running             kube-scheduler            0                   995a9108c2854
	5b9dd31e0a54a       0048118155842       About a minute ago   Running             etcd                      0                   b6cfc672745fa
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-10-19 23:52:47 UTC, end at Tue 2021-10-19 23:54:36 UTC. --
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112488363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112505626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112520721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112705777Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112791244Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc0003d8f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112884757Z" level=info msg="Connect containerd service"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112936696Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.113805939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.113915513Z" level=info msg="Start subscribing containerd event"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114021730Z" level=info msg="Start recovering state"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114053662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114098739Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114171884Z" level=info msg="containerd successfully booted in 0.041280s"
	Oct 19 23:54:04 pause-20211019235243-6808 systemd[1]: Started containerd container runtime.
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199808964Z" level=info msg="Start event monitor"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199901017Z" level=info msg="Start snapshots syncer"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199912912Z" level=info msg="Start cni network conf syncer"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199918049Z" level=info msg="Start streaming server"
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.768460974Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:7556b1b7-1180-43ef-8ff0-ee10aaa82bfe,Namespace:kube-system,Attempt:0,}"
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.787596029Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60 pid=2558
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.965397751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:7556b1b7-1180-43ef-8ff0-ee10aaa82bfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60\""
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.968083127Z" level=info msg="CreateContainer within sandbox \"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Oct 19 23:54:17 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:17.025712973Z" level=info msg="CreateContainer within sandbox \"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf\""
	Oct 19 23:54:17 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:17.026272194Z" level=info msg="StartContainer for \"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf\""
	Oct 19 23:54:17 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:17.177428190Z" level=info msg="StartContainer for \"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf\" returns successfully"
	
	* 
	* ==> coredns [67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20211019235243-6808
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20211019235243-6808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6
	                    minikube.k8s.io/name=pause-20211019235243-6808
	                    minikube.k8s.io/updated_at=2021_10_19T23_53_34_0700
	                    minikube.k8s.io/version=v1.23.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Oct 2021 23:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20211019235243-6808
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Oct 2021 23:54:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20211019235243-6808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
	  System UUID:                53cd153d-c08e-408d-be46-7ceed59b6d66
	  Boot ID:                    3860adfd-184a-4be3-9030-9440e8dafeac
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.2
	  Kube-Proxy Version:         v1.22.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-78fcd69978-whpds                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     46s
	  kube-system                 etcd-pause-20211019235243-6808                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         57s
	  kube-system                 kindnet-hx6rh                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      46s
	  kube-system                 kube-apiserver-pause-20211019235243-6808             250m (3%)     0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-pause-20211019235243-6808    200m (2%)     0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-4h4jt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-pause-20211019235243-6808             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  78s (x5 over 78s)  kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x4 over 78s)  kubelet  Node pause-20211019235243-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x4 over 78s)  kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientPID
	  Normal  Starting                 58s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s                kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s                kubelet  Node pause-20211019235243-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s                kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                37s                kubelet  Node pause-20211019235243-6808 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 2c 16 5e f6 02 42 c0 a8 31 02 08 00        .B,.^..B..1...
	[  +8.191389] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-df7d6eeea64b
	[  +0.000002] ll header: 00000000: 02 42 2c 16 5e f6 02 42 c0 a8 31 02 08 00        .B,.^..B..1...
	[Oct19 23:47] cgroup: cgroup2: unknown option "nsdelegate"
	[ +38.630908] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethea083f4d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 47 db ba ef ce 08 06        .......G......
	[Oct19 23:48] cgroup: cgroup2: unknown option "nsdelegate"
	[ +51.869070] cgroup: cgroup2: unknown option "nsdelegate"
	[Oct19 23:50] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb4a33fa2
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a 54 bf 4a c5 f6 08 06        ......JT.J....
	[Oct19 23:51] cgroup: cgroup2: unknown option "nsdelegate"
	[ +51.597736] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth7849b5fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d6 fb 2b ea 57 cf 08 06        ........+.W...
	[Oct19 23:52] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.224159] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.007127] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.023343] cgroup: cgroup2: unknown option "nsdelegate"
	[Oct19 23:53] cgroup: cgroup2: unknown option "nsdelegate"
	[ +31.773959] cgroup: cgroup2: unknown option "nsdelegate"
	[ +18.505037] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethc63035a3
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e de 92 2e 38 65 08 06        ......^...8e..
	[Oct19 23:54] cgroup: cgroup2: unknown option "nsdelegate"
	[ +12.750622] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth79402e87
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 fd dd 02 d0 a0 08 06        ..............
	[ +21.283240] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995] <==
	* {"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[344966157] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"2.046584727s","start":"2021-10-19T23:54:32.375Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[344966157] 'process raft request'  (duration: 534.697953ms)","trace[344966157] 'compare'  (duration: 1.511600354s)"],"step_count":2}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.046547708s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/kube-controller-manager\" ","response":"range_response_count:1 size:1139"}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:32.375Z","time spent":"2.046678401s","remote":"127.0.0.1:32788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:500 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128008432547652439 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.475514876s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4880"}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[1107180228] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:505; }","duration":"1.475553514s","start":"2021-10-19T23:54:32.947Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[1107180228] 'agreement among raft nodes before linearized reading'  (duration: 1.475475784s)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"688.161593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[580604486] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:505; }","duration":"688.186046ms","start":"2021-10-19T23:54:33.734Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[580604486] 'agreement among raft nodes before linearized reading'  (duration: 688.150914ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:32.947Z","time spent":"1.475606016s","remote":"127.0.0.1:32810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":4904,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:33.734Z","time spent":"688.22531ms","remote":"127.0.0.1:32784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"733.747052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1127"}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[1263557950] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:505; }","duration":"733.785103ms","start":"2021-10-19T23:54:33.689Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[1263557950] 'agreement among raft nodes before linearized reading'  (duration: 733.715029ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:33.689Z","time spent":"733.825261ms","remote":"127.0.0.1:32808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[1798301214] range","detail":"{range_begin:/registry/flowschemas/kube-controller-manager; range_end:; response_count:1; response_revision:505; }","duration":"2.046580549s","start":"2021-10-19T23:54:32.376Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[1798301214] 'agreement among raft nodes before linearized reading'  (duration: 2.04647783s)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.423Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:32.376Z","time spent":"2.047000599s","remote":"127.0.0.1:32934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":1163,"request content":"key:\"/registry/flowschemas/kube-controller-manager\" "}
	{"level":"warn","ts":"2021-10-19T23:54:34.932Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128008432547652451,"retry-timeout":"500ms"}
	{"level":"info","ts":"2021-10-19T23:54:35.101Z","caller":"traceutil/trace.go:171","msg":"trace[1518979986] linearizableReadLoop","detail":"{readStateIndex:529; appliedIndex:529; }","duration":"669.478298ms","start":"2021-10-19T23:54:34.432Z","end":"2021-10-19T23:54:35.101Z","steps":["trace[1518979986] 'read index received'  (duration: 669.468834ms)","trace[1518979986] 'applied index is now lower than readState.Index'  (duration: 8.219µs)"],"step_count":2}
	{"level":"warn","ts":"2021-10-19T23:54:35.101Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"605.086392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-10-19T23:54:35.101Z","caller":"traceutil/trace.go:171","msg":"trace[1020965462] range","detail":"{range_begin:/registry/podsecuritypolicy/; range_end:/registry/podsecuritypolicy0; response_count:0; response_revision:506; }","duration":"605.144778ms","start":"2021-10-19T23:54:34.496Z","end":"2021-10-19T23:54:35.101Z","steps":["trace[1020965462] 'agreement among raft nodes before linearized reading'  (duration: 605.063501ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:35.101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:34.496Z","time spent":"605.188475ms","remote":"127.0.0.1:32880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true "}
	{"level":"warn","ts":"2021-10-19T23:54:35.102Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"413.288669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2021-10-19T23:54:35.102Z","caller":"traceutil/trace.go:171","msg":"trace[1594912121] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:506; }","duration":"413.324235ms","start":"2021-10-19T23:54:34.688Z","end":"2021-10-19T23:54:35.102Z","steps":["trace[1594912121] 'agreement among raft nodes before linearized reading'  (duration: 413.267928ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:35.102Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:34.688Z","time spent":"413.362724ms","remote":"127.0.0.1:32812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":8,"response size":31,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true "}
	{"level":"warn","ts":"2021-10-19T23:54:35.101Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"669.614661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/node-high\" ","response":"range_response_count:1 size:694"}
	{"level":"info","ts":"2021-10-19T23:54:35.102Z","caller":"traceutil/trace.go:171","msg":"trace[264759016] range","detail":"{range_begin:/registry/prioritylevelconfigurations/node-high; range_end:; response_count:1; response_revision:506; }","duration":"670.013343ms","start":"2021-10-19T23:54:34.432Z","end":"2021-10-19T23:54:35.102Z","steps":["trace[264759016] 'agreement among raft nodes before linearized reading'  (duration: 669.553083ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:35.102Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:34.432Z","time spent":"670.228925ms","remote":"127.0.0.1:32936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":718,"request content":"key:\"/registry/prioritylevelconfigurations/node-high\" "}
	
	* 
	* ==> kernel <==
	*  23:54:36 up  1:37,  0 users,  load average: 6.44, 2.92, 1.56
	Linux pause-20211019235243-6808 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb] <==
	* Trace[1120160651]: ---"About to write a response" 901ms (23:54:31.678)
	Trace[1120160651]: [901.340618ms] [901.340618ms] END
	I1019 23:54:31.678965       1 trace.go:205] Trace[1538620247]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:6c0d03ab-95b5-4aa5-a033-8a9a2e317208,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:30.913) (total time: 765ms):
	Trace[1538620247]: ---"About to write a response" 765ms (23:54:31.678)
	Trace[1538620247]: [765.65658ms] [765.65658ms] END
	I1019 23:54:32.372486       1 trace.go:205] Trace[878600797]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/probes,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:27dd69d1-c900-4984-97fe-3da47a846bd5,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:31.683) (total time: 689ms):
	Trace[878600797]: ---"About to write a response" 689ms (23:54:32.372)
	Trace[878600797]: [689.380214ms] [689.380214ms] END
	I1019 23:54:34.423114       1 trace.go:205] Trace[1879905114]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (19-Oct-2021 23:54:32.374) (total time: 2048ms):
	Trace[1879905114]: ---"Transaction committed" 2047ms (23:54:34.423)
	Trace[1879905114]: [2.048882202s] [2.048882202s] END
	I1019 23:54:34.423264       1 trace.go:205] Trace[1561096213]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (19-Oct-2021 23:54:32.946) (total time: 1476ms):
	Trace[1561096213]: [1.476610385s] [1.476610385s] END
	I1019 23:54:34.423410       1 trace.go:205] Trace[1696360612]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:bb76f560-778c-496c-944f-00a8361a4a5f,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:33.688) (total time: 734ms):
	Trace[1696360612]: ---"About to write a response" 734ms (23:54:34.423)
	Trace[1696360612]: [734.719034ms] [734.719034ms] END
	I1019 23:54:34.423462       1 trace.go:205] Trace[1050674558]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-controller-manager,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:0634962f-282a-4198-b952-65f9856dc45b,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:32.375) (total time: 2047ms):
	Trace[1050674558]: ---"About to write a response" 2047ms (23:54:34.423)
	Trace[1050674558]: [2.047631461s] [2.047631461s] END
	I1019 23:54:34.423700       1 trace.go:205] Trace[137594181]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7239fe15-d74b-4f0f-9d72-4691a6139ffc,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:32.946) (total time: 1477ms):
	Trace[137594181]: ---"Listing from storage done" 1476ms (23:54:34.423)
	Trace[137594181]: [1.477065935s] [1.477065935s] END
	I1019 23:54:35.102951       1 trace.go:205] Trace[156861049]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/node-high,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:b736529f-3af5-423e-8646-f62e2f0ae567,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:34.431) (total time: 671ms):
	Trace[156861049]: ---"About to write a response" 670ms (23:54:35.102)
	Trace[156861049]: [671.027828ms] [671.027828ms] END
	
	* 
	* ==> kube-controller-manager [c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef] <==
	* I1019 23:53:49.242542       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1019 23:53:49.254293       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I1019 23:53:49.349271       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1019 23:53:49.371111       1 shared_informer.go:247] Caches are synced for expand 
	I1019 23:53:49.384338       1 shared_informer.go:247] Caches are synced for disruption 
	I1019 23:53:49.384363       1 disruption.go:371] Sending events to api server.
	I1019 23:53:49.395961       1 shared_informer.go:247] Caches are synced for resource quota 
	I1019 23:53:49.405329       1 shared_informer.go:247] Caches are synced for stateful set 
	I1019 23:53:49.411590       1 shared_informer.go:247] Caches are synced for ephemeral 
	I1019 23:53:49.430378       1 shared_informer.go:247] Caches are synced for attach detach 
	I1019 23:53:49.434972       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1019 23:53:49.440838       1 shared_informer.go:247] Caches are synced for resource quota 
	I1019 23:53:49.472196       1 shared_informer.go:247] Caches are synced for endpoint 
	I1019 23:53:49.480582       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I1019 23:53:49.640046       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I1019 23:53:49.856860       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I1019 23:53:49.873856       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1019 23:53:49.892712       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1019 23:53:49.892730       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1019 23:53:50.010599       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4h4jt"
	I1019 23:53:50.010625       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hx6rh"
	I1019 23:53:50.141382       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-v58hq"
	I1019 23:53:50.152234       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-whpds"
	I1019 23:53:50.171778       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-v58hq"
	I1019 23:53:59.191852       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103] <==
	* I1019 23:53:52.186040       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1019 23:53:52.186111       1 server_others.go:140] Detected node IP 192.168.49.2
	W1019 23:53:52.186140       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1019 23:53:52.205748       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1019 23:53:52.205781       1 server_others.go:212] Using iptables Proxier.
	I1019 23:53:52.205795       1 server_others.go:219] creating dualStackProxier for iptables.
	W1019 23:53:52.205808       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1019 23:53:52.206163       1 server.go:649] Version: v1.22.2
	I1019 23:53:52.206860       1 config.go:224] Starting endpoint slice config controller
	I1019 23:53:52.206880       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1019 23:53:52.206902       1 config.go:315] Starting service config controller
	I1019 23:53:52.206907       1 shared_informer.go:240] Waiting for caches to sync for service config
	E1019 23:53:52.209064       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20211019235243-6808.16af93bd9469d598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc053f4840c542e89, ext:66914097, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20211019235243-6808", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-20211019235243-6808", UID:"pause-20211019235243-6808", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20211019235243-6808.16af93bd9469d598" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I1019 23:53:52.307726       1 shared_informer.go:247] Caches are synced for service config 
	I1019 23:53:52.307739       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465] <==
	* I1019 23:53:30.634382       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1019 23:53:30.634734       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 23:53:30.634890       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 23:53:30.635016       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1019 23:53:30.644239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 23:53:30.644325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1019 23:53:30.644396       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 23:53:30.644452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 23:53:30.644512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.644569       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1019 23:53:30.644626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.644688       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 23:53:30.644740       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 23:53:30.644798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 23:53:30.644848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1019 23:53:30.660041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 23:53:30.663078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.663189       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.669323       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 23:53:31.538543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:31.654098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 23:53:31.752887       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:31.827303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 23:53:31.951665       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1019 23:53:34.235540       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-10-19 23:52:47 UTC, end at Tue 2021-10-19 23:54:36 UTC. --
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.250672    2928 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.251020    2928 server.go:868] "Client rotation is on, will bootstrap in background"
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.253116    2928 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.254069    2928 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 23:54:20 pause-20211019235243-6808 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.616542    2928 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265601    2928 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265820    2928 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265870    2928 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265892    2928 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265901    2928 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265934    2928 state_mem.go:36] "Initialized new in-memory state store"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266186    2928 kubelet.go:418] "Attempting to sync node with API server"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266202    2928 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266222    2928 kubelet.go:290] "Adding apiserver pod source"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266233    2928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.317930    2928 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.318490    2928 server.go:1213] "Started kubelet"
	Oct 19 23:54:25 pause-20211019235243-6808 systemd[1]: kubelet.service: Succeeded.
	Oct 19 23:54:25 pause-20211019235243-6808 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf] <==
	* I1019 23:54:17.185090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 23:54:17.195505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 23:54:17.195555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 23:54:17.215068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 23:54:17.215216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20211019235243-6808_d42f5349-b82a-4832-ae4c-2524ab6815c4!
	I1019 23:54:17.216650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ad7c128-ef1e-4334-919c-56318effd6f3", APIVersion:"v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20211019235243-6808_d42f5349-b82a-4832-ae4c-2524ab6815c4 became leader
	I1019 23:54:17.315700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20211019235243-6808_d42f5349-b82a-4832-ae4c-2524ab6815c4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211019235243-6808 -n pause-20211019235243-6808
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211019235243-6808 -n pause-20211019235243-6808: exit status 2 (447.10273ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20211019235243-6808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20211019235243-6808 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20211019235243-6808 describe pod : exit status 1 (54.964186ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-20211019235243-6808 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211019235243-6808
helpers_test.go:235: (dbg) docker inspect pause-20211019235243-6808:

-- stdout --
	[
	    {
	        "Id": "ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996",
	        "Created": "2021-10-19T23:52:45.024014891Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 119055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-10-19T23:52:46.857434767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/hosts",
	        "LogPath": "/var/lib/docker/containers/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996/ca64042ca0a992e0db24bbcb8efadad8b86c99ff454893a54a4f418cfbccd996-json.log",
	        "Name": "/pause-20211019235243-6808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20211019235243-6808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20211019235243-6808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed-init/diff:/var/lib/docker/overlay2/c5f02e45537672705e6e2aba01e28e51c2d46e9e49272769c2467ad01e3a8798/diff:/var/lib/docker/overlay2/0ea8f075729b71baf4ccd823eb0c485ef5ef29bac0353c53e6065129629066ba/diff:/var/lib/docker/overlay2/88533b581a0009f254ebbf2a93788f6fea438104765bb8cd83e8242f6f007976/diff:/var/lib/docker/overlay2/14bd4aa948d82ba23130a19a3ed9b4398666fdb3b6961d42dfbea97f89c5cc8d/diff:/var/lib/docker/overlay2/c45d302040f8b5fa2a3863603e33a4a8945fc269c7c7d5b5fed4c0af74e1a318/diff:/var/lib/docker/overlay2/895bf38a811e31bbe8c73359865eb48ad82bbb675c072510dec392e208b25255/diff:/var/lib/docker/overlay2/888eda45177c40a5e8c507f72bfcd0e63d9fb3fbf18304e303b30f7858d75385/diff:/var/lib/docker/overlay2/1ba6be2d4915efd99d70838596bc306c3a5986d1687b7b5c82e3bcce08f2739a/diff:/var/lib/docker/overlay2/dba40492bd25835ae3cae4014121bd7743a6df6c6b6f568b50a45660d36da1ee/diff:/var/lib/docker/overlay2/f5e2e9
f4ecf2e4c242e9a2d0f40f87313ae18433e49a64f8e612bc21342ba35a/diff:/var/lib/docker/overlay2/55c3bd632609db28efc8e436188a29be66e40c950a2f07ba9a636848bcf8fe54/diff:/var/lib/docker/overlay2/e743e9a12912c11573135dbbc7915737001fecbbddc57c8609bb97c48e477ab5/diff:/var/lib/docker/overlay2/0d2ab28eea343e33cf288e808e3583a4ef3a2d0f0e152cb4b6a696fe2829aa37/diff:/var/lib/docker/overlay2/a9d9379f41d6df50597861e1eca41e6abb25962190c7c4bf07f9b6cecd398ab8/diff:/var/lib/docker/overlay2/246f3a84f3d47634be292753e6ab114ff0fdd6c19174f98d20d3631b985bd65a/diff:/var/lib/docker/overlay2/1555899ff98b2aefb432f4f7413f555e1419a0c7f1ff8906ff527987be55f98f/diff:/var/lib/docker/overlay2/2046d76f6555966732657251ae8f140fcb5c9ba7becf9f4ea2f38d02553b81a8/diff:/var/lib/docker/overlay2/45481a57311300993df1fbf7c1affd06d3f1b0b4f3d4786b38e20ed6419a53d4/diff:/var/lib/docker/overlay2/3e461aa0d9087cb03e88430b7acc263fdb5b7c9685b89e24b2e636c0b764ff95/diff:/var/lib/docker/overlay2/fd0a889ad13f00ae1f69b1916a5927927a49c38948b00d54db2145eed106176b/diff:/var/lib/d
ocker/overlay2/b9ba91322e89a003f05f9d5f11671de5af7cda4550b6f60c9d4ff90724e4e6f9/diff:/var/lib/docker/overlay2/e778c4cc31a31336a7ba77c6088ba8748d9c140197750d9e6c90b127c01e530b/diff:/var/lib/docker/overlay2/95fbf604ebdd8cc367f642230ac0a68cbc72994b41f02db95f354c4fdc75af24/diff:/var/lib/docker/overlay2/3f2110c7a6beb28d94b23699e2ca6363aa4ea63d43035d2e539e47fcd331a2ba/diff:/var/lib/docker/overlay2/1bf0ef7d7a45ab3ea27aa3d2f6b34dce6e59fc18e7e464e718352e8ea7029fd8/diff:/var/lib/docker/overlay2/23c70a1791e26b5ae01282413205d77ba4ecd95b3e170ea40176e856715a9359/diff:/var/lib/docker/overlay2/d830811487929eab10415090b3d97a6d7865d450267b84aa4184392814d8b1df/diff:/var/lib/docker/overlay2/b36fdfbf0198d3321dcce07dba3a6910f2cbe2dc276d6530613f56cd94afcc31/diff:/var/lib/docker/overlay2/cc70ad4a43336c6e67e06e2c9e0a2f30140594f1db80424144ea2b077c4ab1af/diff:/var/lib/docker/overlay2/d728e7526151cbb5fa43fbde71e65b654ce3b44983512cd2e690c8798d0d3ef8/diff:/var/lib/docker/overlay2/d701d06f30bc941260abb1757ee832c9aa2195a14f1552bd9d4baf5e614
4d5ad/diff:/var/lib/docker/overlay2/6bd5ed3108886521103186b5318e03a8d90f0dbc80d27990929d83cd889e533d/diff:/var/lib/docker/overlay2/e27df6450133558da7555ef518af1a37ae6a07ce47c6767fec076ca914b56d50/diff:/var/lib/docker/overlay2/f820866aa600bd3aaac0b53c24ed1df712be2f6192e572cb551d67d8ed5f2ae8/diff:/var/lib/docker/overlay2/867ec5e44c5a5ca3a4bbcb5e3ebe23023fd7c7f42cc297619dd37fa19e58a4ce/diff:/var/lib/docker/overlay2/1de77d9dd95b7ad0b72d0e674dbf02cf40000662dc54842d9d003febd75e9183/diff:/var/lib/docker/overlay2/8249f5dae9e196bbeeca2cec8401a9bd3dc73ea9eb3884270fea296047dc72e1/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/164e112dc8082320ff66e29a5bbd10345b283e77bda8403b0d1eff66c19631ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20211019235243-6808",
	                "Source": "/var/lib/docker/volumes/pause-20211019235243-6808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20211019235243-6808",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20211019235243-6808",
	                "name.minikube.sigs.k8s.io": "pause-20211019235243-6808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c30954b550a8a0c7fccb8feca23da872fa02f79e0ed590ddb7cdcf6bac975f27",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c30954b550a8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20211019235243-6808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ca64042ca0a9"
	                    ],
	                    "NetworkID": "25cdaedb005cd1adf2c3e34e38db740b67fb4515aa837a06ae69830a0360464d",
	                    "EndpointID": "ff2efd15a8d6db9810ade83164075c4e794aff259445b744b6374f30856ece63",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211019235243-6808 -n pause-20211019235243-6808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211019235243-6808 -n pause-20211019235243-6808: exit status 2 (415.696965ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-20211019235243-6808 logs -n 25
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | multinode-20211019233815-6808-m03        | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:48:42 UTC | Tue, 19 Oct 2021 23:48:45 UTC |
	|         | multinode-20211019233815-6808-m03        |                                          |         |         |                               |                               |
	| delete  | -p                                       | multinode-20211019233815-6808            | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:48:45 UTC | Tue, 19 Oct 2021 23:48:50 UTC |
	|         | multinode-20211019233815-6808            |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:48:50 UTC | Tue, 19 Oct 2021 23:50:15 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:15 UTC | Tue, 19 Oct 2021 23:50:16 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | -- sudo crictl pull busybox              |                                          |         |         |                               |                               |
	| start   | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:17 UTC | Tue, 19 Oct 2021 23:50:57 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:57 UTC | Tue, 19 Oct 2021 23:50:57 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20211019234850-6808         | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:50:57 UTC | Tue, 19 Oct 2021 23:51:00 UTC |
	|         | test-preload-20211019234850-6808         |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:51:00 UTC | Tue, 19 Oct 2021 23:51:43 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:51:44 UTC | Tue, 19 Oct 2021 23:51:44 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:51:57 UTC | Tue, 19 Oct 2021 23:52:22 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20211019235100-6808       | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:24 UTC | Tue, 19 Oct 2021 23:52:29 UTC |
	|         | scheduled-stop-20211019235100-6808       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20211019235229-6808 | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:36 UTC | Tue, 19 Oct 2021 23:52:43 UTC |
	|         | insufficient-storage-20211019235229-6808 |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20211019235243-6808    | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:43 UTC | Tue, 19 Oct 2021 23:53:50 UTC |
	|         | force-systemd-env-20211019235243-6808    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20211019235243-6808    | force-systemd-env-20211019235243-6808    | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:50 UTC | Tue, 19 Oct 2021 23:53:51 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20211019235243-6808    | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:51 UTC | Tue, 19 Oct 2021 23:53:54 UTC |
	|         | force-systemd-env-20211019235243-6808    |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20211019235354-6808           | kubenet-20211019235354-6808              | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:54 UTC | Tue, 19 Oct 2021 23:53:54 UTC |
	| delete  | -p flannel-20211019235354-6808           | flannel-20211019235354-6808              | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:54 UTC | Tue, 19 Oct 2021 23:53:55 UTC |
	| delete  | -p false-20211019235355-6808             | false-20211019235355-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:53:55 UTC | Tue, 19 Oct 2021 23:53:56 UTC |
	| start   | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:43 UTC | Tue, 19 Oct 2021 23:54:01 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20211019235243-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:52:43 UTC | Tue, 19 Oct 2021 23:54:16 UTC |
	|         | offline-containerd-20211019235243-6808   |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:01 UTC | Tue, 19 Oct 2021 23:54:17 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| pause   | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:17 UTC | Tue, 19 Oct 2021 23:54:18 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| unpause | -p pause-20211019235243-6808             | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:19 UTC | Tue, 19 Oct 2021 23:54:20 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20211019235243-6808   | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:16 UTC | Tue, 19 Oct 2021 23:54:20 UTC |
	|         | offline-containerd-20211019235243-6808   |                                          |         |         |                               |                               |
	| -p      | pause-20211019235243-6808 logs           | pause-20211019235243-6808                | jenkins | v1.23.2 | Tue, 19 Oct 2021 23:54:31 UTC | Tue, 19 Oct 2021 23:54:36 UTC |
	|         | -n 25                                    |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/19 23:54:35
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 23:54:35.597478  137989 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:54:35.597552  137989 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:54:35.597561  137989 out.go:310] Setting ErrFile to fd 2...
	I1019 23:54:35.597565  137989 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:54:35.597666  137989 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:54:35.597898  137989 out.go:304] Setting JSON to false
	I1019 23:54:35.646787  137989 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5838,"bootTime":1634681838,"procs":287,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:54:35.646899  137989 start.go:121] virtualization: kvm guest
	I1019 23:54:35.649788  137989 out.go:176] * [running-upgrade-20211019235358-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1019 23:54:35.649920  137989 notify.go:169] Checking for updates...
	I1019 23:54:35.651347  137989 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:54:35.652891  137989 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 23:54:35.654400  137989 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1019 23:54:35.655868  137989 out.go:176]   - MINIKUBE_LOCATION=12739
	I1019 23:54:35.656809  137989 config.go:176] Loaded profile config "running-upgrade-20211019235358-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1019 23:54:35.659341  137989 out.go:176] * Kubernetes 1.22.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.2
	I1019 23:54:35.659382  137989 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:54:35.736004  137989 docker.go:132] docker version: linux-19.03.15
	I1019 23:54:35.736136  137989 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:54:35.864402  137989 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:90 SystemTime:2021-10-19 23:54:35.789468355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:54:35.864528  137989 docker.go:237] overlay module found
	I1019 23:54:35.867933  137989 out.go:176] * Using the docker driver based on existing profile
	I1019 23:54:35.867963  137989 start.go:278] selected driver: docker
	I1019 23:54:35.867969  137989 start.go:751] validating driver "docker" against &{Name:running-upgrade-20211019235358-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20211019235358-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.153 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1019 23:54:35.868074  137989 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1019 23:54:35.868116  137989 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1019 23:54:35.868139  137989 out.go:241] ! Your cgroup does not allow setting memory.
	I1019 23:54:35.869623  137989 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1019 23:54:35.870507  137989 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:54:35.993525  137989 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:true NGoroutines:78 SystemTime:2021-10-19 23:54:35.922407158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W1019 23:54:35.993702  137989 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1019 23:54:35.993746  137989 out.go:241] ! Your cgroup does not allow setting memory.
	I1019 23:54:35.996074  137989 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1019 23:54:35.996173  137989 cni.go:93] Creating CNI manager for ""
	I1019 23:54:35.996186  137989 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:54:35.996200  137989 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1019 23:54:35.996210  137989 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1019 23:54:35.996221  137989 start_flags.go:280] config:
	{Name:running-upgrade-20211019235358-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20211019235358-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.153 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1019 23:54:35.998040  137989 out.go:176] * Starting control plane node running-upgrade-20211019235358-6808 in cluster running-upgrade-20211019235358-6808
	I1019 23:54:35.998075  137989 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1019 23:54:35.999629  137989 out.go:176] * Pulling base image ...
	I1019 23:54:35.999659  137989 preload.go:130] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1019 23:54:35.999708  137989 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon
	W1019 23:54:36.026808  137989 preload.go:113] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I1019 23:54:36.026974  137989 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/running-upgrade-20211019235358-6808/config.json ...
	I1019 23:54:36.027320  137989 cache.go:107] acquiring lock: {Name:mk2635e7a89632cdd97367c8ef161acc91ec9c6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027494  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1019 23:54:36.027516  137989 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 204.204µs
	I1019 23:54:36.027510  137989 cache.go:107] acquiring lock: {Name:mkcc2743c6abd34b5b1db2c9a92204b7b245f4aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027539  137989 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1019 23:54:36.027557  137989 cache.go:107] acquiring lock: {Name:mk38628226079db58341c9de050490be461e68ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027609  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
	I1019 23:54:36.027627  137989 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 72.983µs
	I1019 23:54:36.027637  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I1019 23:54:36.027642  137989 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
	I1019 23:54:36.027658  137989 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 163.039µs
	I1019 23:54:36.027671  137989 cache.go:80] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I1019 23:54:36.027657  137989 cache.go:107] acquiring lock: {Name:mk4cbdd89a6325d0eb294ea58a7cd94cc35d5456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027675  137989 cache.go:107] acquiring lock: {Name:mk0ec939e762eb3225ee7d849c42be7428471c37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027723  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
	I1019 23:54:36.027743  137989 cache.go:96] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 87.726µs
	I1019 23:54:36.027758  137989 cache.go:80] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
	I1019 23:54:36.027738  137989 cache.go:107] acquiring lock: {Name:mk625061f76d51b1e6c66438d160032e271729fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027764  137989 cache.go:107] acquiring lock: {Name:mkf1289102fe96f059ad5528cfab642f20eb43ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027806  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1019 23:54:36.027832  137989 cache.go:107] acquiring lock: {Name:mk7649a02786735c5fac86fb3b14b8d0d03347f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027850  137989 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 113.618µs
	I1019 23:54:36.027869  137989 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1019 23:54:36.027809  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
	I1019 23:54:36.027889  137989 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 126.691µs
	I1019 23:54:36.027891  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
	I1019 23:54:36.027911  137989 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 80.995µs
	I1019 23:54:36.027920  137989 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
	I1019 23:54:36.027897  137989 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
	I1019 23:54:36.027914  137989 cache.go:107] acquiring lock: {Name:mkc86f6bf5d9e0bb18b89e36646d1b1b8f50eee3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027938  137989 cache.go:107] acquiring lock: {Name:mk346b13e1ee4039933c38e3e3f181c10df7657e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.027971  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
	I1019 23:54:36.027985  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
	I1019 23:54:36.027985  137989 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 75.521µs
	I1019 23:54:36.028003  137989 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
	I1019 23:54:36.028001  137989 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 65.57µs
	I1019 23:54:36.027728  137989 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 23:54:36.028012  137989 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
	I1019 23:54:36.028024  137989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 350.482µs
	I1019 23:54:36.028038  137989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 23:54:36.028051  137989 cache.go:87] Successfully saved all images to host disk.
	I1019 23:54:36.139213  137989 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
	I1019 23:54:36.139239  137989 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping load
	I1019 23:54:36.139256  137989 cache.go:206] Successfully downloaded all kic artifacts
	I1019 23:54:36.139291  137989 start.go:313] acquiring machines lock for running-upgrade-20211019235358-6808: {Name:mka1eff290b7fafd9df7f24180fea625d64c63e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 23:54:36.139391  137989 start.go:317] acquired machines lock for "running-upgrade-20211019235358-6808" in 76.359µs
	I1019 23:54:36.139417  137989 start.go:93] Skipping create...Using existing machine configuration
	I1019 23:54:36.139423  137989 fix.go:55] fixHost starting: 
	I1019 23:54:36.139771  137989 cli_runner.go:115] Run: docker container inspect running-upgrade-20211019235358-6808 --format={{.State.Status}}
	I1019 23:54:36.190003  137989 fix.go:108] recreateIfNeeded on running-upgrade-20211019235358-6808: state=Running err=<nil>
	W1019 23:54:36.190036  137989 fix.go:134] unexpected machine state, will restart: <nil>
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7141c9d1bc5b5       6e38f40d628db       21 seconds ago       Running             storage-provisioner       0                   f94a7deeaee11
	67717a61116e3       8d147537fb7d1       38 seconds ago       Running             coredns                   0                   2f0e26a91659a
	8e118a6817318       6de166512aa22       46 seconds ago       Running             kindnet-cni               0                   f431cd5d317c1
	5f570741dc102       873127efbc8a7       46 seconds ago       Running             kube-proxy                0                   2255d1c3ffe74
	c17f0c5af3c7c       5425bcbd23c54       About a minute ago   Running             kube-controller-manager   0                   51bfb259838f9
	f7fd5d2b98678       e64579b7d8862       About a minute ago   Running             kube-apiserver            0                   25169287aa5e5
	7ff3672f9c090       b51ddc1014b04       About a minute ago   Running             kube-scheduler            0                   995a9108c2854
	5b9dd31e0a54a       0048118155842       About a minute ago   Running             etcd                      0                   b6cfc672745fa
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-10-19 23:52:47 UTC, end at Tue 2021-10-19 23:54:38 UTC. --
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112488363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112505626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112520721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112705777Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112791244Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc0003d8f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112884757Z" level=info msg="Connect containerd service"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.112936696Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.113805939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.113915513Z" level=info msg="Start subscribing containerd event"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114021730Z" level=info msg="Start recovering state"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114053662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114098739Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.114171884Z" level=info msg="containerd successfully booted in 0.041280s"
	Oct 19 23:54:04 pause-20211019235243-6808 systemd[1]: Started containerd container runtime.
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199808964Z" level=info msg="Start event monitor"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199901017Z" level=info msg="Start snapshots syncer"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199912912Z" level=info msg="Start cni network conf syncer"
	Oct 19 23:54:04 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:04.199918049Z" level=info msg="Start streaming server"
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.768460974Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:7556b1b7-1180-43ef-8ff0-ee10aaa82bfe,Namespace:kube-system,Attempt:0,}"
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.787596029Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60 pid=2558
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.965397751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:7556b1b7-1180-43ef-8ff0-ee10aaa82bfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60\""
	Oct 19 23:54:16 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:16.968083127Z" level=info msg="CreateContainer within sandbox \"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Oct 19 23:54:17 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:17.025712973Z" level=info msg="CreateContainer within sandbox \"f94a7deeaee11858fa57001e5b2b78c83791f3cbc53624b6f2e1831215e60d60\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf\""
	Oct 19 23:54:17 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:17.026272194Z" level=info msg="StartContainer for \"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf\""
	Oct 19 23:54:17 pause-20211019235243-6808 containerd[2266]: time="2021-10-19T23:54:17.177428190Z" level=info msg="StartContainer for \"7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf\" returns successfully"
	
	* 
	* ==> coredns [67717a61116e33321ebcbc26bb1066b02d77abca490421cdcda28484901dac38] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20211019235243-6808
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20211019235243-6808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6
	                    minikube.k8s.io/name=pause-20211019235243-6808
	                    minikube.k8s.io/updated_at=2021_10_19T23_53_34_0700
	                    minikube.k8s.io/version=v1.23.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Oct 2021 23:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20211019235243-6808
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Oct 2021 23:54:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Oct 2021 23:53:59 +0000   Tue, 19 Oct 2021 23:53:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20211019235243-6808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
	  System UUID:                53cd153d-c08e-408d-be46-7ceed59b6d66
	  Boot ID:                    3860adfd-184a-4be3-9030-9440e8dafeac
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.2
	  Kube-Proxy Version:         v1.22.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-whpds                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     48s
	  kube-system                 etcd-pause-20211019235243-6808                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-hx6rh                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-pause-20211019235243-6808             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-20211019235243-6808    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-4h4jt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-pause-20211019235243-6808             100m (1%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  80s (x5 over 80s)  kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x4 over 80s)  kubelet  Node pause-20211019235243-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x4 over 80s)  kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet  Node pause-20211019235243-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet  Node pause-20211019235243-6808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                39s                kubelet  Node pause-20211019235243-6808 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 2c 16 5e f6 02 42 c0 a8 31 02 08 00        .B,.^..B..1...
	[  +8.191389] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-df7d6eeea64b
	[  +0.000002] ll header: 00000000: 02 42 2c 16 5e f6 02 42 c0 a8 31 02 08 00        .B,.^..B..1...
	[Oct19 23:47] cgroup: cgroup2: unknown option "nsdelegate"
	[ +38.630908] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethea083f4d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 47 db ba ef ce 08 06        .......G......
	[Oct19 23:48] cgroup: cgroup2: unknown option "nsdelegate"
	[ +51.869070] cgroup: cgroup2: unknown option "nsdelegate"
	[Oct19 23:50] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb4a33fa2
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a 54 bf 4a c5 f6 08 06        ......JT.J....
	[Oct19 23:51] cgroup: cgroup2: unknown option "nsdelegate"
	[ +51.597736] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth7849b5fd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d6 fb 2b ea 57 cf 08 06        ........+.W...
	[Oct19 23:52] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.224159] cgroup: cgroup2: unknown option "nsdelegate"
	[  +0.007127] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.023343] cgroup: cgroup2: unknown option "nsdelegate"
	[Oct19 23:53] cgroup: cgroup2: unknown option "nsdelegate"
	[ +31.773959] cgroup: cgroup2: unknown option "nsdelegate"
	[ +18.505037] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethc63035a3
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e de 92 2e 38 65 08 06        ......^...8e..
	[Oct19 23:54] cgroup: cgroup2: unknown option "nsdelegate"
	[ +12.750622] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth79402e87
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 fd dd 02 d0 a0 08 06        ..............
	[ +21.283240] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [5b9dd31e0a54a2ef258e3644d6aeb964a53c1c5cbea1f4c81d34ec48a98fd995] <==
	* {"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[344966157] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"2.046584727s","start":"2021-10-19T23:54:32.375Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[344966157] 'process raft request'  (duration: 534.697953ms)","trace[344966157] 'compare'  (duration: 1.511600354s)"],"step_count":2}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.046547708s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/kube-controller-manager\" ","response":"range_response_count:1 size:1139"}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:32.375Z","time spent":"2.046678401s","remote":"127.0.0.1:32788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:500 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128008432547652439 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.475514876s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4880"}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[1107180228] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:505; }","duration":"1.475553514s","start":"2021-10-19T23:54:32.947Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[1107180228] 'agreement among raft nodes before linearized reading'  (duration: 1.475475784s)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"688.161593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[580604486] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:505; }","duration":"688.186046ms","start":"2021-10-19T23:54:33.734Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[580604486] 'agreement among raft nodes before linearized reading'  (duration: 688.150914ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:32.947Z","time spent":"1.475606016s","remote":"127.0.0.1:32810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":4904,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:33.734Z","time spent":"688.22531ms","remote":"127.0.0.1:32784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"733.747052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1127"}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[1263557950] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:505; }","duration":"733.785103ms","start":"2021-10-19T23:54:33.689Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[1263557950] 'agreement among raft nodes before linearized reading'  (duration: 733.715029ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:33.689Z","time spent":"733.825261ms","remote":"127.0.0.1:32808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2021-10-19T23:54:34.422Z","caller":"traceutil/trace.go:171","msg":"trace[1798301214] range","detail":"{range_begin:/registry/flowschemas/kube-controller-manager; range_end:; response_count:1; response_revision:505; }","duration":"2.046580549s","start":"2021-10-19T23:54:32.376Z","end":"2021-10-19T23:54:34.422Z","steps":["trace[1798301214] 'agreement among raft nodes before linearized reading'  (duration: 2.04647783s)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:34.423Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:32.376Z","time spent":"2.047000599s","remote":"127.0.0.1:32934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":1163,"request content":"key:\"/registry/flowschemas/kube-controller-manager\" "}
	{"level":"warn","ts":"2021-10-19T23:54:34.932Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128008432547652451,"retry-timeout":"500ms"}
	{"level":"info","ts":"2021-10-19T23:54:35.101Z","caller":"traceutil/trace.go:171","msg":"trace[1518979986] linearizableReadLoop","detail":"{readStateIndex:529; appliedIndex:529; }","duration":"669.478298ms","start":"2021-10-19T23:54:34.432Z","end":"2021-10-19T23:54:35.101Z","steps":["trace[1518979986] 'read index received'  (duration: 669.468834ms)","trace[1518979986] 'applied index is now lower than readState.Index'  (duration: 8.219µs)"],"step_count":2}
	{"level":"warn","ts":"2021-10-19T23:54:35.101Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"605.086392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-10-19T23:54:35.101Z","caller":"traceutil/trace.go:171","msg":"trace[1020965462] range","detail":"{range_begin:/registry/podsecuritypolicy/; range_end:/registry/podsecuritypolicy0; response_count:0; response_revision:506; }","duration":"605.144778ms","start":"2021-10-19T23:54:34.496Z","end":"2021-10-19T23:54:35.101Z","steps":["trace[1020965462] 'agreement among raft nodes before linearized reading'  (duration: 605.063501ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:35.101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:34.496Z","time spent":"605.188475ms","remote":"127.0.0.1:32880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true "}
	{"level":"warn","ts":"2021-10-19T23:54:35.102Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"413.288669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2021-10-19T23:54:35.102Z","caller":"traceutil/trace.go:171","msg":"trace[1594912121] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:506; }","duration":"413.324235ms","start":"2021-10-19T23:54:34.688Z","end":"2021-10-19T23:54:35.102Z","steps":["trace[1594912121] 'agreement among raft nodes before linearized reading'  (duration: 413.267928ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:35.102Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:34.688Z","time spent":"413.362724ms","remote":"127.0.0.1:32812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":8,"response size":31,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true "}
	{"level":"warn","ts":"2021-10-19T23:54:35.101Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"669.614661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/node-high\" ","response":"range_response_count:1 size:694"}
	{"level":"info","ts":"2021-10-19T23:54:35.102Z","caller":"traceutil/trace.go:171","msg":"trace[264759016] range","detail":"{range_begin:/registry/prioritylevelconfigurations/node-high; range_end:; response_count:1; response_revision:506; }","duration":"670.013343ms","start":"2021-10-19T23:54:34.432Z","end":"2021-10-19T23:54:35.102Z","steps":["trace[264759016] 'agreement among raft nodes before linearized reading'  (duration: 669.553083ms)"],"step_count":1}
	{"level":"warn","ts":"2021-10-19T23:54:35.102Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-10-19T23:54:34.432Z","time spent":"670.228925ms","remote":"127.0.0.1:32936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":718,"request content":"key:\"/registry/prioritylevelconfigurations/node-high\" "}
	
	* 
	* ==> kernel <==
	*  23:54:38 up  1:37,  0 users,  load average: 6.81, 3.06, 1.61
	Linux pause-20211019235243-6808 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f7fd5d2b9867844d78c46d232049ec663270016db2809ec5dc7cff7b64ed42bb] <==
	* Trace[1120160651]: ---"About to write a response" 901ms (23:54:31.678)
	Trace[1120160651]: [901.340618ms] [901.340618ms] END
	I1019 23:54:31.678965       1 trace.go:205] Trace[1538620247]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:6c0d03ab-95b5-4aa5-a033-8a9a2e317208,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:30.913) (total time: 765ms):
	Trace[1538620247]: ---"About to write a response" 765ms (23:54:31.678)
	Trace[1538620247]: [765.65658ms] [765.65658ms] END
	I1019 23:54:32.372486       1 trace.go:205] Trace[878600797]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/probes,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:27dd69d1-c900-4984-97fe-3da47a846bd5,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:31.683) (total time: 689ms):
	Trace[878600797]: ---"About to write a response" 689ms (23:54:32.372)
	Trace[878600797]: [689.380214ms] [689.380214ms] END
	I1019 23:54:34.423114       1 trace.go:205] Trace[1879905114]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (19-Oct-2021 23:54:32.374) (total time: 2048ms):
	Trace[1879905114]: ---"Transaction committed" 2047ms (23:54:34.423)
	Trace[1879905114]: [2.048882202s] [2.048882202s] END
	I1019 23:54:34.423264       1 trace.go:205] Trace[1561096213]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (19-Oct-2021 23:54:32.946) (total time: 1476ms):
	Trace[1561096213]: [1.476610385s] [1.476610385s] END
	I1019 23:54:34.423410       1 trace.go:205] Trace[1696360612]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:bb76f560-778c-496c-944f-00a8361a4a5f,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:33.688) (total time: 734ms):
	Trace[1696360612]: ---"About to write a response" 734ms (23:54:34.423)
	Trace[1696360612]: [734.719034ms] [734.719034ms] END
	I1019 23:54:34.423462       1 trace.go:205] Trace[1050674558]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-controller-manager,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:0634962f-282a-4198-b952-65f9856dc45b,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:32.375) (total time: 2047ms):
	Trace[1050674558]: ---"About to write a response" 2047ms (23:54:34.423)
	Trace[1050674558]: [2.047631461s] [2.047631461s] END
	I1019 23:54:34.423700       1 trace.go:205] Trace[137594181]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7239fe15-d74b-4f0f-9d72-4691a6139ffc,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:32.946) (total time: 1477ms):
	Trace[137594181]: ---"Listing from storage done" 1476ms (23:54:34.423)
	Trace[137594181]: [1.477065935s] [1.477065935s] END
	I1019 23:54:35.102951       1 trace.go:205] Trace[156861049]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/node-high,user-agent:kube-apiserver/v1.22.2 (linux/amd64) kubernetes/8b5a191,audit-id:b736529f-3af5-423e-8646-f62e2f0ae567,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (19-Oct-2021 23:54:34.431) (total time: 671ms):
	Trace[156861049]: ---"About to write a response" 670ms (23:54:35.102)
	Trace[156861049]: [671.027828ms] [671.027828ms] END
	
	* 
	* ==> kube-controller-manager [c17f0c5af3c7c5c2a0f2ed0bf2a1d0654a71550afd751c6a0e7f736234423cef] <==
	* I1019 23:53:49.242542       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1019 23:53:49.254293       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I1019 23:53:49.349271       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1019 23:53:49.371111       1 shared_informer.go:247] Caches are synced for expand 
	I1019 23:53:49.384338       1 shared_informer.go:247] Caches are synced for disruption 
	I1019 23:53:49.384363       1 disruption.go:371] Sending events to api server.
	I1019 23:53:49.395961       1 shared_informer.go:247] Caches are synced for resource quota 
	I1019 23:53:49.405329       1 shared_informer.go:247] Caches are synced for stateful set 
	I1019 23:53:49.411590       1 shared_informer.go:247] Caches are synced for ephemeral 
	I1019 23:53:49.430378       1 shared_informer.go:247] Caches are synced for attach detach 
	I1019 23:53:49.434972       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1019 23:53:49.440838       1 shared_informer.go:247] Caches are synced for resource quota 
	I1019 23:53:49.472196       1 shared_informer.go:247] Caches are synced for endpoint 
	I1019 23:53:49.480582       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I1019 23:53:49.640046       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I1019 23:53:49.856860       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I1019 23:53:49.873856       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1019 23:53:49.892712       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1019 23:53:49.892730       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1019 23:53:50.010599       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4h4jt"
	I1019 23:53:50.010625       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hx6rh"
	I1019 23:53:50.141382       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-v58hq"
	I1019 23:53:50.152234       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-whpds"
	I1019 23:53:50.171778       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-v58hq"
	I1019 23:53:59.191852       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [5f570741dc1021db804ad8506cb914c8399cc863b2f79d81a1f5c9284ed89103] <==
	* I1019 23:53:52.186040       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1019 23:53:52.186111       1 server_others.go:140] Detected node IP 192.168.49.2
	W1019 23:53:52.186140       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1019 23:53:52.205748       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1019 23:53:52.205781       1 server_others.go:212] Using iptables Proxier.
	I1019 23:53:52.205795       1 server_others.go:219] creating dualStackProxier for iptables.
	W1019 23:53:52.205808       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1019 23:53:52.206163       1 server.go:649] Version: v1.22.2
	I1019 23:53:52.206860       1 config.go:224] Starting endpoint slice config controller
	I1019 23:53:52.206880       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1019 23:53:52.206902       1 config.go:315] Starting service config controller
	I1019 23:53:52.206907       1 shared_informer.go:240] Waiting for caches to sync for service config
	E1019 23:53:52.209064       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20211019235243-6808.16af93bd9469d598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc053f4840c542e89, ext:66914097, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20211019235243-6808", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-20211019235243-6808", UID:"pause-20211019235243-6808", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20211019235243-6808.16af93bd9469d598" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I1019 23:53:52.307726       1 shared_informer.go:247] Caches are synced for service config 
	I1019 23:53:52.307739       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [7ff3672f9c090d8f1094f7cedfac0961c84921065abf3237aa7088c62e868465] <==
	* I1019 23:53:30.634382       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1019 23:53:30.634734       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 23:53:30.634890       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 23:53:30.635016       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1019 23:53:30.644239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 23:53:30.644325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1019 23:53:30.644396       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 23:53:30.644452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 23:53:30.644512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.644569       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1019 23:53:30.644626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.644688       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 23:53:30.644740       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 23:53:30.644798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 23:53:30.644848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1019 23:53:30.660041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 23:53:30.663078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.663189       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:30.669323       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 23:53:31.538543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:31.654098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 23:53:31.752887       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 23:53:31.827303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 23:53:31.951665       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1019 23:53:34.235540       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-10-19 23:52:47 UTC, end at Tue 2021-10-19 23:54:38 UTC. --
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.250672    2928 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.251020    2928 server.go:868] "Client rotation is on, will bootstrap in background"
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.253116    2928 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.254069    2928 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 23:54:20 pause-20211019235243-6808 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 19 23:54:20 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:20.616542    2928 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265601    2928 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265820    2928 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265870    2928 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265892    2928 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265901    2928 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.265934    2928 state_mem.go:36] "Initialized new in-memory state store"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266186    2928 kubelet.go:418] "Attempting to sync node with API server"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266202    2928 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266222    2928 kubelet.go:290] "Adding apiserver pod source"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.266233    2928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.317930    2928 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Oct 19 23:54:25 pause-20211019235243-6808 kubelet[2928]: I1019 23:54:25.318490    2928 server.go:1213] "Started kubelet"
	Oct 19 23:54:25 pause-20211019235243-6808 systemd[1]: kubelet.service: Succeeded.
	Oct 19 23:54:25 pause-20211019235243-6808 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [7141c9d1bc5b52d1c9ffb0a56bc1ea0894d5137cde80ff31a3e5d2e881ad0ddf] <==
	* I1019 23:54:17.185090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 23:54:17.195505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 23:54:17.195555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 23:54:17.215068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 23:54:17.215216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20211019235243-6808_d42f5349-b82a-4832-ae4c-2524ab6815c4!
	I1019 23:54:17.216650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ad7c128-ef1e-4334-919c-56318effd6f3", APIVersion:"v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20211019235243-6808_d42f5349-b82a-4832-ae4c-2524ab6815c4 became leader
	I1019 23:54:17.315700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20211019235243-6808_d42f5349-b82a-4832-ae4c-2524ab6815c4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211019235243-6808 -n pause-20211019235243-6808
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211019235243-6808 -n pause-20211019235243-6808: exit status 2 (421.275974ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20211019235243-6808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20211019235243-6808 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20211019235243-6808 describe pod : exit status 1 (55.093194ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context pause-20211019235243-6808 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (19.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20211020000119-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20211020000119-6808 --alsologtostderr -v=1: exit status 80 (2.239932881s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20211020000119-6808 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 00:11:50.458416  263652 out.go:297] Setting OutFile to fd 1 ...
	I1020 00:11:50.458504  263652 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1020 00:11:50.458510  263652 out.go:310] Setting ErrFile to fd 2...
	I1020 00:11:50.458517  263652 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1020 00:11:50.458640  263652 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1020 00:11:50.458821  263652 out.go:304] Setting JSON to false
	I1020 00:11:50.458844  263652 mustload.go:65] Loading cluster: embed-certs-20211020000119-6808
	I1020 00:11:50.459168  263652 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:11:50.459550  263652 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:50.498668  263652 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:50.498936  263652 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1020 00:11:50.577667  263652 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-10-20 00:11:50.535130414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1020 00:11:50.578135  263652 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12081/minikube-v1.23.1-1633115168-12081.iso https://github.com/kubernetes/minikube/releases/download/v1.23.1-1633115168-12081/minikube-v1.23.1-1633115168-12081.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.23.1-1633115168-12081.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20211020000119-6808 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 00:11:50.581036  263652 out.go:176] * Pausing node embed-certs-20211020000119-6808 ... 
	I1020 00:11:50.581062  263652 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:50.581290  263652 ssh_runner.go:152] Run: systemctl --version
	I1020 00:11:50.581327  263652 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:50.621282  263652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:50.695433  263652 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:50.703789  263652 pause.go:50] kubelet running: true
	I1020 00:11:50.703872  263652 ssh_runner.go:152] Run: sudo systemctl disable --now kubelet
	I1020 00:11:50.818452  263652 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I1020 00:11:51.094926  263652 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:51.104509  263652 pause.go:50] kubelet running: true
	I1020 00:11:51.104571  263652 ssh_runner.go:152] Run: sudo systemctl disable --now kubelet
	I1020 00:11:51.212578  263652 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I1020 00:11:51.753287  263652 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:51.762629  263652 pause.go:50] kubelet running: true
	I1020 00:11:51.762679  263652 ssh_runner.go:152] Run: sudo systemctl disable --now kubelet
	I1020 00:11:51.871449  263652 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I1020 00:11:52.526982  263652 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:52.536676  263652 pause.go:50] kubelet running: true
	I1020 00:11:52.536738  263652 ssh_runner.go:152] Run: sudo systemctl disable --now kubelet
	I1020 00:11:52.645485  263652 out.go:176] 
	W1020 00:11:52.645622  263652 out.go:241] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W1020 00:11:52.645643  263652 out.go:241] * 
	W1020 00:11:52.647795  263652 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 00:11:52.649198  263652 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-linux-amd64 pause -p embed-certs-20211020000119-6808 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211020000119-6808
helpers_test.go:235: (dbg) docker inspect embed-certs-20211020000119-6808:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27",
	        "Created": "2021-10-20T00:04:43.629101344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-10-20T00:06:11.513950317Z",
	            "FinishedAt": "2021-10-20T00:06:08.613519797Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/hostname",
	        "HostsPath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/hosts",
	        "LogPath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27-json.log",
	        "Name": "/embed-certs-20211020000119-6808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20211020000119-6808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211020000119-6808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9-init/diff:/var/lib/docker/overlay2/c5f02e45537672705e6e2aba01e28e51c2d46e9e49272769c2467ad01e3a8798/diff:/var/lib/docker/overlay2/0ea8f075729b71baf4ccd823eb0c485ef5ef29bac0353c53e6065129629066ba/diff:/var/lib/docker/overlay2/88533b581a0009f254ebbf2a93788f6fea438104765bb8cd83e8242f6f007976/diff:/var/lib/docker/overlay2/14bd4aa948d82ba23130a19a3ed9b4398666fdb3b6961d42dfbea97f89c5cc8d/diff:/var/lib/docker/overlay2/c45d302040f8b5fa2a3863603e33a4a8945fc269c7c7d5b5fed4c0af74e1a318/diff:/var/lib/docker/overlay2/895bf38a811e31bbe8c73359865eb48ad82bbb675c072510dec392e208b25255/diff:/var/lib/docker/overlay2/888eda45177c40a5e8c507f72bfcd0e63d9fb3fbf18304e303b30f7858d75385/diff:/var/lib/docker/overlay2/1ba6be2d4915efd99d70838596bc306c3a5986d1687b7b5c82e3bcce08f2739a/diff:/var/lib/docker/overlay2/dba40492bd25835ae3cae4014121bd7743a6df6c6b6f568b50a45660d36da1ee/diff:/var/lib/docker/overlay2/f5e2e9
f4ecf2e4c242e9a2d0f40f87313ae18433e49a64f8e612bc21342ba35a/diff:/var/lib/docker/overlay2/55c3bd632609db28efc8e436188a29be66e40c950a2f07ba9a636848bcf8fe54/diff:/var/lib/docker/overlay2/e743e9a12912c11573135dbbc7915737001fecbbddc57c8609bb97c48e477ab5/diff:/var/lib/docker/overlay2/0d2ab28eea343e33cf288e808e3583a4ef3a2d0f0e152cb4b6a696fe2829aa37/diff:/var/lib/docker/overlay2/a9d9379f41d6df50597861e1eca41e6abb25962190c7c4bf07f9b6cecd398ab8/diff:/var/lib/docker/overlay2/246f3a84f3d47634be292753e6ab114ff0fdd6c19174f98d20d3631b985bd65a/diff:/var/lib/docker/overlay2/1555899ff98b2aefb432f4f7413f555e1419a0c7f1ff8906ff527987be55f98f/diff:/var/lib/docker/overlay2/2046d76f6555966732657251ae8f140fcb5c9ba7becf9f4ea2f38d02553b81a8/diff:/var/lib/docker/overlay2/45481a57311300993df1fbf7c1affd06d3f1b0b4f3d4786b38e20ed6419a53d4/diff:/var/lib/docker/overlay2/3e461aa0d9087cb03e88430b7acc263fdb5b7c9685b89e24b2e636c0b764ff95/diff:/var/lib/docker/overlay2/fd0a889ad13f00ae1f69b1916a5927927a49c38948b00d54db2145eed106176b/diff:/var/lib/d
ocker/overlay2/b9ba91322e89a003f05f9d5f11671de5af7cda4550b6f60c9d4ff90724e4e6f9/diff:/var/lib/docker/overlay2/e778c4cc31a31336a7ba77c6088ba8748d9c140197750d9e6c90b127c01e530b/diff:/var/lib/docker/overlay2/95fbf604ebdd8cc367f642230ac0a68cbc72994b41f02db95f354c4fdc75af24/diff:/var/lib/docker/overlay2/3f2110c7a6beb28d94b23699e2ca6363aa4ea63d43035d2e539e47fcd331a2ba/diff:/var/lib/docker/overlay2/1bf0ef7d7a45ab3ea27aa3d2f6b34dce6e59fc18e7e464e718352e8ea7029fd8/diff:/var/lib/docker/overlay2/23c70a1791e26b5ae01282413205d77ba4ecd95b3e170ea40176e856715a9359/diff:/var/lib/docker/overlay2/d830811487929eab10415090b3d97a6d7865d450267b84aa4184392814d8b1df/diff:/var/lib/docker/overlay2/b36fdfbf0198d3321dcce07dba3a6910f2cbe2dc276d6530613f56cd94afcc31/diff:/var/lib/docker/overlay2/cc70ad4a43336c6e67e06e2c9e0a2f30140594f1db80424144ea2b077c4ab1af/diff:/var/lib/docker/overlay2/d728e7526151cbb5fa43fbde71e65b654ce3b44983512cd2e690c8798d0d3ef8/diff:/var/lib/docker/overlay2/d701d06f30bc941260abb1757ee832c9aa2195a14f1552bd9d4baf5e614
4d5ad/diff:/var/lib/docker/overlay2/6bd5ed3108886521103186b5318e03a8d90f0dbc80d27990929d83cd889e533d/diff:/var/lib/docker/overlay2/e27df6450133558da7555ef518af1a37ae6a07ce47c6767fec076ca914b56d50/diff:/var/lib/docker/overlay2/f820866aa600bd3aaac0b53c24ed1df712be2f6192e572cb551d67d8ed5f2ae8/diff:/var/lib/docker/overlay2/867ec5e44c5a5ca3a4bbcb5e3ebe23023fd7c7f42cc297619dd37fa19e58a4ce/diff:/var/lib/docker/overlay2/1de77d9dd95b7ad0b72d0e674dbf02cf40000662dc54842d9d003febd75e9183/diff:/var/lib/docker/overlay2/8249f5dae9e196bbeeca2cec8401a9bd3dc73ea9eb3884270fea296047dc72e1/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211020000119-6808",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211020000119-6808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211020000119-6808",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211020000119-6808",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211020000119-6808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1881fce6e1679a94d100bbef96cbc2b75b2a4c23759c2e76ac2bacde1b7d8285",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1881fce6e167",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211020000119-6808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "08410c61420a"
	                    ],
	                    "NetworkID": "470d197350e46e808401c61690f42374e6568e699bc44af734e51d09c76bf9a8",
	                    "EndpointID": "fc908795c7033ede9437e91166114d4e30e7cfe4c7a5c743ae27cb505235a271",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
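Earlier in the stderr log, cli_runner.go resolved the SSH endpoint with the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`; the inspect output above confirms the 22/tcp binding is 127.0.0.1:33030. A self-contained sketch of the same lookup done in Go against trimmed inspect JSON — field names follow the output above, but the helper itself is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// portBinding mirrors one entry under NetworkSettings.Ports in
// `docker inspect` output (trimmed to the fields used here).
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// hostPort returns the first host port bound to containerPort,
// the programmatic equivalent of the inspect template
// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
func hostPort(inspectJSON []byte, containerPort string) (string, error) {
	var doc struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}
	if err := json.Unmarshal(inspectJSON, &doc); err != nil {
		return "", err
	}
	bindings := doc.NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for %s", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	// Sample trimmed to the 22/tcp binding seen in the inspect output above.
	sample := []byte(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33030"}]}}}`)
	p, err := hostPort(sample, "22/tcp")
	fmt.Println(p, err) // 33030 <nil>
}
```

Note that `docker inspect <name>` without `-f` emits a one-element JSON array (as above), so a caller feeding it raw output would unmarshal into a slice first.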
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211020000119-6808 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:04:56 UTC | Wed, 20 Oct 2021 00:04:56 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:01:19 UTC | Wed, 20 Oct 2021 00:05:38 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:05:47 UTC | Wed, 20 Oct 2021 00:05:48 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:05:48 UTC | Wed, 20 Oct 2021 00:06:09 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:06:09 UTC | Wed, 20 Oct 2021 00:06:09 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20211020000033-6808                 | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:02:36 UTC | Wed, 20 Oct 2021 00:08:03 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.3-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:14 UTC | Wed, 20 Oct 2021 00:08:14 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:14 UTC | Wed, 20 Oct 2021 00:08:15 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| unpause | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:15 UTC | Wed, 20 Oct 2021 00:08:16 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:17 UTC | Wed, 20 Oct 2021 00:08:20 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:21 UTC | Wed, 20 Oct 2021 00:08:21 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:02:45 UTC | Wed, 20 Oct 2021 00:10:20 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false                              |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:04:56 UTC | Wed, 20 Oct 2021 00:10:23 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:31 UTC | Wed, 20 Oct 2021 00:10:31 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:31 UTC | Wed, 20 Oct 2021 00:10:32 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| unpause | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:33 UTC | Wed, 20 Oct 2021 00:10:33 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:34 UTC | Wed, 20 Oct 2021 00:10:34 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:34 UTC | Wed, 20 Oct 2021 00:10:35 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| unpause | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:36 UTC | Wed, 20 Oct 2021 00:10:37 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:34 UTC | Wed, 20 Oct 2021 00:10:37 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:38 UTC | Wed, 20 Oct 2021 00:10:38 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:38 UTC | Wed, 20 Oct 2021 00:10:41 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:41 UTC | Wed, 20 Oct 2021 00:10:41 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:06:09 UTC | Wed, 20 Oct 2021 00:11:39 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:11:50 UTC | Wed, 20 Oct 2021 00:11:50 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
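Read back as a shell invocation, the `start` row for the embed-certs profile in the table above corresponds to the following command (the `minikube` binary name is inferred; the audit table records only the subcommand and flags):

```shell
# Flags exactly as recorded in the audit table for the embed-certs start row.
profile=embed-certs-20211020000119-6808
start_cmd="minikube start -p $profile --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.2"
echo "$start_cmd"
```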
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/20 00:06:09
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 00:06:09.302430  240589 out.go:297] Setting OutFile to fd 1 ...
	I1020 00:06:09.302620  240589 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1020 00:06:09.302629  240589 out.go:310] Setting ErrFile to fd 2...
	I1020 00:06:09.302633  240589 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1020 00:06:09.302720  240589 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1020 00:06:09.302914  240589 out.go:304] Setting JSON to false
	I1020 00:06:09.339174  240589 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":6531,"bootTime":1634681838,"procs":343,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1020 00:06:09.339278  240589 start.go:121] virtualization: kvm guest
	I1020 00:06:09.341804  240589 out.go:176] * [embed-certs-20211020000119-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1020 00:06:09.343407  240589 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:06:09.341973  240589 notify.go:169] Checking for updates...
	I1020 00:06:09.344720  240589 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 00:06:09.346112  240589 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1020 00:06:09.347535  240589 out.go:176]   - MINIKUBE_LOCATION=12739
	I1020 00:06:09.347945  240589 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:06:09.348356  240589 driver.go:343] Setting default libvirt URI to qemu:///system
	I1020 00:06:09.404187  240589 docker.go:132] docker version: linux-19.03.15
	I1020 00:06:09.404270  240589 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1020 00:06:09.487171  240589 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-10-20 00:06:09.441923152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1020 00:06:09.487249  240589 docker.go:237] overlay module found
	I1020 00:06:09.489177  240589 out.go:176] * Using the docker driver based on existing profile
	I1020 00:06:09.489205  240589 start.go:278] selected driver: docker
	I1020 00:06:09.489212  240589 start.go:751] validating driver "docker" against &{Name:embed-certs-20211020000119-6808 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1020 00:06:09.489333  240589 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1020 00:06:09.489374  240589 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1020 00:06:09.489397  240589 out.go:241] ! Your cgroup does not allow setting memory.
	I1020 00:06:09.491002  240589 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1020 00:06:09.491875  240589 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1020 00:06:09.572869  240589 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-10-20 00:06:09.5291671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W1020 00:06:09.572991  240589 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1020 00:06:09.573022  240589 out.go:241] ! Your cgroup does not allow setting memory.
	I1020 00:06:09.575120  240589 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1020 00:06:09.575228  240589 start_flags.go:748] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 00:06:09.575253  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:06:09.575263  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:06:09.575276  240589 start_flags.go:280] config:
	{Name:embed-certs-20211020000119-6808 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1020 00:06:09.577194  240589 out.go:176] * Starting control plane node embed-certs-20211020000119-6808 in cluster embed-certs-20211020000119-6808
	I1020 00:06:09.577227  240589 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1020 00:06:09.578744  240589 out.go:176] * Pulling base image ...
	I1020 00:06:09.578776  240589 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1020 00:06:09.578811  240589 preload.go:146] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4
	I1020 00:06:09.578826  240589 cache.go:57] Caching tarball of preloaded images
	I1020 00:06:09.578876  240589 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1020 00:06:09.579087  240589 preload.go:172] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1020 00:06:09.579105  240589 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2 on containerd
	I1020 00:06:09.579259  240589 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/config.json ...
	I1020 00:06:09.681469  240589 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1020 00:06:09.681494  240589 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1020 00:06:09.681511  240589 cache.go:206] Successfully downloaded all kic artifacts
	I1020 00:06:09.681543  240589 start.go:313] acquiring machines lock for embed-certs-20211020000119-6808: {Name:mk7187c9ae4df6d41aa7c4922b992dfb2ef12393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 00:06:09.681652  240589 start.go:317] acquired machines lock for "embed-certs-20211020000119-6808" in 64.93µs
	I1020 00:06:09.681678  240589 start.go:93] Skipping create...Using existing machine configuration
	I1020 00:06:09.681682  240589 fix.go:55] fixHost starting: 
	I1020 00:06:09.681925  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:06:09.721280  240589 fix.go:108] recreateIfNeeded on embed-certs-20211020000119-6808: state=Stopped err=<nil>
	W1020 00:06:09.721309  240589 fix.go:134] unexpected machine state, will restart: <nil>
	I1020 00:06:06.010824  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:08.510799  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:07.397812  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:09.398009  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:08.345903  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:10.847006  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:09.723989  240589 out.go:176] * Restarting existing docker container for "embed-certs-20211020000119-6808" ...
	I1020 00:06:09.724048  240589 cli_runner.go:115] Run: docker start embed-certs-20211020000119-6808
	I1020 00:06:11.522123  240589 cli_runner.go:168] Completed: docker start embed-certs-20211020000119-6808: (1.798049644s)
	I1020 00:06:11.522197  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:06:11.565365  240589 kic.go:420] container "embed-certs-20211020000119-6808" state is running.
	I1020 00:06:11.565737  240589 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211020000119-6808
	I1020 00:06:11.607142  240589 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/config.json ...
	I1020 00:06:11.607366  240589 machine.go:88] provisioning docker machine ...
	I1020 00:06:11.607392  240589 ubuntu.go:169] provisioning hostname "embed-certs-20211020000119-6808"
	I1020 00:06:11.607446  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:11.648830  240589 main.go:130] libmachine: Using SSH client type: native
	I1020 00:06:11.649014  240589 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1020 00:06:11.649033  240589 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211020000119-6808 && echo "embed-certs-20211020000119-6808" | sudo tee /etc/hostname
	I1020 00:06:11.649483  240589 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58498->127.0.0.1:33030: read: connection reset by peer
	I1020 00:06:11.009995  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:13.010500  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:15.010983  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:11.398393  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:13.898112  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:13.346550  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:15.346980  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:14.763882  240589 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211020000119-6808
	
	I1020 00:06:14.763953  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:14.803082  240589 main.go:130] libmachine: Using SSH client type: native
	I1020 00:06:14.803220  240589 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1020 00:06:14.803242  240589 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211020000119-6808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211020000119-6808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211020000119-6808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 00:06:14.915281  240589 main.go:130] libmachine: SSH cmd err, output: <nil>: 
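	The idempotent /etc/hosts rewrite that minikube streams over SSH above can be exercised locally against a scratch copy of the file (no sudo needed; GNU grep/sed assumed, and the `-xq` anchoring from the original script is simplified to `-q`; the hostname is the profile name from this log):

```shell
# Reproduce minikube's /etc/hosts hostname injection against a temp file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=embed-certs-20211020000119-6808
if ! grep -q "\s$name" "$hosts"; then
    if grep -q '^127.0.1.1\s' "$hosts"; then
        # a 127.0.1.1 entry already exists: rewrite it in place
        sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/" "$hosts"
    else
        # no 127.0.1.1 entry yet: append one
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
cat "$hosts"
```

Running the block twice leaves the file unchanged the second time, which is why minikube can safely re-run it on every restart of an existing container.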
	I1020 00:06:14.915314  240589 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube}
	I1020 00:06:14.915333  240589 ubuntu.go:177] setting up certificates
	I1020 00:06:14.915343  240589 provision.go:83] configureAuth start
	I1020 00:06:14.915385  240589 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211020000119-6808
	I1020 00:06:14.955130  240589 provision.go:138] copyHostCerts
	I1020 00:06:14.955216  240589 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem, removing ...
	I1020 00:06:14.955229  240589 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem
	I1020 00:06:14.955305  240589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem (1082 bytes)
	I1020 00:06:14.955401  240589 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem, removing ...
	I1020 00:06:14.955413  240589 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem
	I1020 00:06:14.955445  240589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem (1123 bytes)
	I1020 00:06:14.955503  240589 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem, removing ...
	I1020 00:06:14.955511  240589 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem
	I1020 00:06:14.955538  240589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem (1679 bytes)
	I1020 00:06:14.955592  240589 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211020000119-6808 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211020000119-6808]
	I1020 00:06:15.115907  240589 provision.go:172] copyRemoteCerts
	I1020 00:06:15.115958  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 00:06:15.115988  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.159903  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.278849  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 00:06:15.295278  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1020 00:06:15.311492  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 00:06:15.327812  240589 provision.go:86] duration metric: configureAuth took 412.459086ms
	I1020 00:06:15.327868  240589 ubuntu.go:193] setting minikube options for container-runtime
	I1020 00:06:15.328021  240589 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:06:15.328033  240589 machine.go:91] provisioned docker machine in 3.72065274s
	I1020 00:06:15.328040  240589 start.go:267] post-start starting for "embed-certs-20211020000119-6808" (driver="docker")
	I1020 00:06:15.328045  240589 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 00:06:15.328080  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 00:06:15.328123  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.368400  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.446897  240589 ssh_runner.go:152] Run: cat /etc/os-release
	I1020 00:06:15.449422  240589 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1020 00:06:15.449444  240589 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 00:06:15.449452  240589 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1020 00:06:15.449457  240589 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1020 00:06:15.449464  240589 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/addons for local assets ...
	I1020 00:06:15.449507  240589 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files for local assets ...
	I1020 00:06:15.449570  240589 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem -> 68082.pem in /etc/ssl/certs
	I1020 00:06:15.449643  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I1020 00:06:15.455856  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /etc/ssl/certs/68082.pem (1708 bytes)
	I1020 00:06:15.471499  240589 start.go:270] post-start completed in 143.450063ms
	I1020 00:06:15.471550  240589 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 00:06:15.471583  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.512337  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.591886  240589 fix.go:57] fixHost completed within 5.910199097s
	I1020 00:06:15.591910  240589 start.go:80] releasing machines lock for "embed-certs-20211020000119-6808", held for 5.910246637s
	I1020 00:06:15.591976  240589 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211020000119-6808
	I1020 00:06:15.632809  240589 ssh_runner.go:152] Run: systemctl --version
	I1020 00:06:15.632828  240589 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1020 00:06:15.632865  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.632872  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.674281  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.676062  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.771657  240589 ssh_runner.go:152] Run: sudo systemctl stop -f crio
	I1020 00:06:15.781776  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1020 00:06:15.790100  240589 docker.go:156] disabling docker service ...
	I1020 00:06:15.790144  240589 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
	I1020 00:06:15.804968  240589 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
	I1020 00:06:15.813315  240589 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
	I1020 00:06:15.879518  240589 ssh_runner.go:152] Run: sudo systemctl mask docker.service
	I1020 00:06:15.938852  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
	I1020 00:06:15.947547  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 00:06:15.959658  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuY2dyb3Vwc10KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLmNyaV0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzY
W5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My41IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKCVtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmRdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jXQogICAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgICAgU3lzdGVtZENncm91cCA9IGZhbHNlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVud
HJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I1020 00:06:15.971992  240589 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 00:06:15.977723  240589 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1020 00:06:15.977768  240589 ssh_runner.go:152] Run: sudo modprobe br_netfilter
	I1020 00:06:15.984350  240589 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 00:06:15.990350  240589 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1020 00:06:16.050094  240589 ssh_runner.go:152] Run: sudo systemctl restart containerd
	I1020 00:06:16.116825  240589 start.go:393] Will wait 60s for socket path /run/containerd/containerd.sock
	I1020 00:06:16.116897  240589 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
	I1020 00:06:16.120659  240589 start.go:414] Will wait 60s for crictl version
	I1020 00:06:16.120705  240589 ssh_runner.go:152] Run: sudo crictl version
	I1020 00:06:16.144288  240589 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-20T00:06:16Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1020 00:06:17.510240  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:19.510451  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:16.397770  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:18.397987  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:20.897395  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:17.845906  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:19.847301  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:22.009986  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:24.010345  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:22.897480  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:25.397498  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:22.345565  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:24.345988  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:27.192504  240589 ssh_runner.go:152] Run: sudo crictl version
	I1020 00:06:27.224058  240589 start.go:423] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I1020 00:06:27.224125  240589 ssh_runner.go:152] Run: containerd --version
	I1020 00:06:27.247660  240589 ssh_runner.go:152] Run: containerd --version
	I1020 00:06:27.271629  240589 out.go:176] * Preparing Kubernetes v1.22.2 on containerd 1.4.9 ...
	I1020 00:06:27.271708  240589 cli_runner.go:115] Run: docker network inspect embed-certs-20211020000119-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 00:06:27.310346  240589 ssh_runner.go:152] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1020 00:06:27.313566  240589 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 00:06:27.322677  240589 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1020 00:06:27.322729  240589 ssh_runner.go:152] Run: sudo crictl images --output json
	I1020 00:06:27.346327  240589 containerd.go:635] all images are preloaded for containerd runtime.
	I1020 00:06:27.346346  240589 containerd.go:539] Images already preloaded, skipping extraction
	I1020 00:06:27.346385  240589 ssh_runner.go:152] Run: sudo crictl images --output json
	I1020 00:06:27.368362  240589 containerd.go:635] all images are preloaded for containerd runtime.
	I1020 00:06:27.368384  240589 cache_images.go:79] Images are preloaded, skipping loading
	I1020 00:06:27.368436  240589 ssh_runner.go:152] Run: sudo crictl info
	I1020 00:06:27.391015  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:06:27.391040  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:06:27.391054  240589 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1020 00:06:27.391069  240589 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211020000119-6808 NodeName:embed-certs-20211020000119-6808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1020 00:06:27.391228  240589 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211020000119-6808"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 00:06:27.391352  240589 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20211020000119-6808 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1020 00:06:27.391412  240589 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.2
	I1020 00:06:27.398607  240589 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 00:06:27.398666  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 00:06:27.405263  240589 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I1020 00:06:27.417352  240589 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 00:06:27.429311  240589 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1020 00:06:27.441088  240589 ssh_runner.go:152] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1020 00:06:27.443741  240589 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 00:06:27.451934  240589 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808 for IP: 192.168.58.2
	I1020 00:06:27.452034  240589 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key
	I1020 00:06:27.452066  240589 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key
	I1020 00:06:27.452131  240589 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/client.key
	I1020 00:06:27.452179  240589 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/apiserver.key.cee25041
	I1020 00:06:27.452210  240589 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/proxy-client.key
	I1020 00:06:27.452315  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem (1338 bytes)
	W1020 00:06:27.452342  240589 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808_empty.pem, impossibly tiny 0 bytes
	I1020 00:06:27.452352  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 00:06:27.452378  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem (1082 bytes)
	I1020 00:06:27.452400  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem (1123 bytes)
	I1020 00:06:27.452420  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem (1679 bytes)
	I1020 00:06:27.452470  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem (1708 bytes)
	I1020 00:06:27.453404  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1020 00:06:27.468991  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 00:06:27.484431  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 00:06:27.500560  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 00:06:27.516625  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 00:06:27.532300  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 00:06:27.547994  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 00:06:27.563524  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1020 00:06:27.578941  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem --> /usr/share/ca-certificates/6808.pem (1338 bytes)
	I1020 00:06:27.595429  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /usr/share/ca-certificates/68082.pem (1708 bytes)
	I1020 00:06:27.610865  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 00:06:27.626160  240589 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 00:06:27.637664  240589 ssh_runner.go:152] Run: openssl version
	I1020 00:06:27.642063  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6808.pem && ln -fs /usr/share/ca-certificates/6808.pem /etc/ssl/certs/6808.pem"
	I1020 00:06:27.648693  240589 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/6808.pem
	I1020 00:06:27.651418  240589 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 19 23:29 /usr/share/ca-certificates/6808.pem
	I1020 00:06:27.651450  240589 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6808.pem
	I1020 00:06:27.656039  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6808.pem /etc/ssl/certs/51391683.0"
	I1020 00:06:27.662349  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68082.pem && ln -fs /usr/share/ca-certificates/68082.pem /etc/ssl/certs/68082.pem"
	I1020 00:06:27.669076  240589 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/68082.pem
	I1020 00:06:27.671878  240589 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 19 23:29 /usr/share/ca-certificates/68082.pem
	I1020 00:06:27.671919  240589 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68082.pem
	I1020 00:06:27.676445  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68082.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 00:06:27.682789  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 00:06:27.689585  240589 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 00:06:27.692386  240589 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 19 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1020 00:06:27.692421  240589 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 00:06:27.696850  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 00:06:27.703109  240589 kubeadm.go:390] StartCluster: {Name:embed-certs-20211020000119-6808 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil>
ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1020 00:06:27.703191  240589 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1020 00:06:27.703226  240589 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:06:27.725958  240589 cri.go:76] found id: "c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6"
	I1020 00:06:27.725977  240589 cri.go:76] found id: "a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1"
	I1020 00:06:27.725984  240589 cri.go:76] found id: "f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9"
	I1020 00:06:27.725991  240589 cri.go:76] found id: "c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15"
	I1020 00:06:27.725998  240589 cri.go:76] found id: "dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156"
	I1020 00:06:27.726006  240589 cri.go:76] found id: "07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a"
	I1020 00:06:27.726015  240589 cri.go:76] found id: "6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f"
	I1020 00:06:27.726025  240589 cri.go:76] found id: "cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8"
	I1020 00:06:27.726036  240589 cri.go:76] found id: ""
	I1020 00:06:27.726080  240589 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1020 00:06:27.739226  240589 cri.go:103] JSON = null
	W1020 00:06:27.739271  240589 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I1020 00:06:27.739317  240589 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 00:06:27.745997  240589 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1020 00:06:27.746016  240589 kubeadm.go:600] restartCluster start
	I1020 00:06:27.746068  240589 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1020 00:06:27.751933  240589 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:27.752721  240589 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20211020000119-6808" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:06:27.753031  240589 kubeconfig.go:127] "embed-certs-20211020000119-6808" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig - will repair!
	I1020 00:06:27.753586  240589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:06:27.755655  240589 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 00:06:27.761574  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:27.761613  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:27.773653  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:27.974031  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:27.974102  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:27.987733  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.173901  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.173986  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.187297  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.374530  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.374613  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.387936  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.574199  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.574263  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.587924  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.774223  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.774299  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.787608  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.973839  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.973913  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.987356  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.174592  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.174668  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.187874  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:26.010454  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:28.509990  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:27.397929  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:29.897706  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:26.846227  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:29.345819  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:29.374417  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.374488  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.388055  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.574325  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.574395  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.588045  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.774251  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.774327  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.787522  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.974759  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.974850  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.988493  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.173716  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.173794  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.187149  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.374432  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.374503  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.387952  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.574194  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.574264  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.588219  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.774487  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.774567  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.787750  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.787776  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.787828  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.799489  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.799509  240589 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I1020 00:06:30.799516  240589 kubeadm.go:1032] stopping kube-system containers ...
	I1020 00:06:30.799525  240589 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:06:30.799561  240589 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:06:30.834440  240589 cri.go:76] found id: "c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6"
	I1020 00:06:30.834463  240589 cri.go:76] found id: "a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1"
	I1020 00:06:30.834468  240589 cri.go:76] found id: "f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9"
	I1020 00:06:30.834473  240589 cri.go:76] found id: "c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15"
	I1020 00:06:30.834477  240589 cri.go:76] found id: "dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156"
	I1020 00:06:30.834482  240589 cri.go:76] found id: "07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a"
	I1020 00:06:30.834487  240589 cri.go:76] found id: "6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f"
	I1020 00:06:30.834491  240589 cri.go:76] found id: "cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8"
	I1020 00:06:30.834495  240589 cri.go:76] found id: ""
	I1020 00:06:30.834500  240589 cri.go:220] Stopping containers: [c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6 a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1 f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9 c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15 dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156 07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a 6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8]
	I1020 00:06:30.834539  240589 ssh_runner.go:152] Run: which crictl
	I1020 00:06:30.837524  240589 ssh_runner.go:152] Run: sudo /usr/bin/crictl stop c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6 a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1 f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9 c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15 dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156 07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a 6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8
	I1020 00:06:30.861801  240589 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I1020 00:06:30.871303  240589 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:06:30.877932  240589 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Oct 20 00:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 20 00:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Oct 20 00:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 20 00:04 /etc/kubernetes/scheduler.conf
	
	I1020 00:06:30.877975  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 00:06:30.884278  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 00:06:30.890497  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 00:06:30.896978  240589 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.897024  240589 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 00:06:30.902937  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 00:06:30.909136  240589 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.909170  240589 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 00:06:30.915073  240589 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:06:30.921165  240589 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1020 00:06:30.921183  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:30.966403  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:31.966559  240589 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.000129219s)
	I1020 00:06:31.966587  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:32.090848  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:32.145647  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:32.225526  240589 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:06:32.225581  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:32.739171  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:33.239115  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:33.739297  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:34.239356  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:30.510423  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:33.010933  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:31.897825  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:34.397777  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:31.345905  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:33.846052  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:34.738792  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:35.238858  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:35.739612  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:36.238993  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:36.738977  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:37.239588  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:37.739297  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:38.239516  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:38.738960  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:38.756280  240589 api_server.go:71] duration metric: took 6.530753166s to wait for apiserver process to appear ...
	I1020 00:06:38.756305  240589 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:06:38.756316  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:35.509720  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:37.510415  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:39.510776  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:36.897685  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:39.397842  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:36.346821  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:38.349512  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:40.846565  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:42.244630  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 00:06:42.244657  240589 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 00:06:42.744872  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:42.749162  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1020 00:06:42.749187  240589 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1020 00:06:43.245750  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:43.250901  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1020 00:06:43.250932  240589 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1020 00:06:43.745156  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:43.749488  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1020 00:06:43.755054  240589 api_server.go:140] control plane version: v1.22.2
	I1020 00:06:43.755076  240589 api_server.go:130] duration metric: took 4.998765151s to wait for apiserver health ...
	I1020 00:06:43.755087  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:06:43.755094  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:06:43.757314  240589 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:06:43.757377  240589 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:06:43.761031  240589 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1020 00:06:43.761051  240589 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:06:43.773672  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:06:43.919885  240589 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:06:43.931132  240589 system_pods.go:59] 9 kube-system pods found
	I1020 00:06:43.931161  240589 system_pods.go:61] "coredns-78fcd69978-fg4q9" [f034ef5b-a0da-4301-9a1f-44df829113d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 00:06:43.931169  240589 system_pods.go:61] "etcd-embed-certs-20211020000119-6808" [8eb4c69b-031a-444a-a28c-a8f281925a8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 00:06:43.931174  240589 system_pods.go:61] "kindnet-5p66p" [d20601c0-5621-4a57-90ab-643005978f0c] Running
	I1020 00:06:43.931178  240589 system_pods.go:61] "kube-apiserver-embed-certs-20211020000119-6808" [824e320d-c60f-4fd8-867e-bcec1f4bb811] Running
	I1020 00:06:43.931182  240589 system_pods.go:61] "kube-controller-manager-embed-certs-20211020000119-6808" [3eeae4c1-1a77-4c22-8f42-f7c9e0cf69f4] Running
	I1020 00:06:43.931186  240589 system_pods.go:61] "kube-proxy-6kjxd" [08322047-768f-4cab-ba86-f4b9086a3349] Running
	I1020 00:06:43.931191  240589 system_pods.go:61] "kube-scheduler-embed-certs-20211020000119-6808" [f621e964-08b6-486f-b7e2-aacf75e911af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 00:06:43.931196  240589 system_pods.go:61] "metrics-server-7c784ccb57-2df4s" [a75689b5-73cd-4993-a4ff-de7005c47eb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:06:43.931200  240589 system_pods.go:61] "storage-provisioner" [ae570658-56f1-42cc-927b-390d4b9e7fe7] Running
	I1020 00:06:43.931205  240589 system_pods.go:74] duration metric: took 11.297918ms to wait for pod list to return data ...
	I1020 00:06:43.931211  240589 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:06:43.934362  240589 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:06:43.934385  240589 node_conditions.go:123] node cpu capacity is 8
	I1020 00:06:43.934394  240589 node_conditions.go:105] duration metric: took 3.179694ms to run NodePressure ...
	I1020 00:06:43.934408  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:44.086190  240589 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1020 00:06:44.089799  240589 kubeadm.go:746] kubelet initialised
	I1020 00:06:44.089816  240589 kubeadm.go:747] duration metric: took 3.605637ms waiting for restarted kubelet to initialise ...
	I1020 00:06:44.089823  240589 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:06:44.094220  240589 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:42.010385  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:44.010846  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:41.897843  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:43.898084  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:45.898550  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:43.346797  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:45.847238  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:46.107960  240589 pod_ready.go:102] pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.607158  240589 pod_ready.go:102] pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:46.509986  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.510080  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.398002  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:50.398483  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.346824  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:50.846281  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:50.107914  240589 pod_ready.go:92] pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:50.107938  240589 pod_ready.go:81] duration metric: took 6.013692429s waiting for pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:50.107950  240589 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:51.118058  240589 pod_ready.go:92] pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:51.118083  240589 pod_ready.go:81] duration metric: took 1.010125287s waiting for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:51.118100  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.627218  240589 pod_ready.go:92] pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:52.627242  240589 pod_ready.go:81] duration metric: took 1.50913362s waiting for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.627253  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.632785  240589 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:52.632836  240589 pod_ready.go:81] duration metric: took 5.575451ms waiting for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.632864  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kjxd" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.637065  240589 pod_ready.go:92] pod "kube-proxy-6kjxd" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:52.637081  240589 pod_ready.go:81] duration metric: took 4.198274ms waiting for pod "kube-proxy-6kjxd" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.637089  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:53.646143  240589 pod_ready.go:92] pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:53.646167  240589 pod_ready.go:81] duration metric: took 1.009072799s waiting for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:53.646176  240589 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:51.011173  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:53.510555  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:52.898512  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:55.398157  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:53.346739  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:55.347046  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:55.711508  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:57.712310  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:56.010028  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:58.010437  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:57.898010  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:00.397302  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:57.846483  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:00.345776  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:59.712758  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:02.211666  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:00.510404  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:03.010275  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:05.010790  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:02.398696  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:04.897626  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:02.345881  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:04.346322  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:04.710829  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:06.711303  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:08.711429  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:07.509992  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:09.510323  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:06.897686  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:09.397668  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:06.346456  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:08.845864  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:10.711549  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:13.211454  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:12.010234  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:14.509664  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:11.897468  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:13.897593  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:15.898039  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:11.346281  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:13.346656  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:15.846868  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:15.212302  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:17.710951  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:16.510959  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:19.009950  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:18.398003  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:20.398246  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:18.346750  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:20.845766  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:19.711340  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:21.711558  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.210455  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:21.510420  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.011118  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:22.897752  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.393657  217964 pod_ready.go:81] duration metric: took 4m0.383160461s waiting for pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace to be "Ready" ...
	E1020 00:07:24.393680  217964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:07:24.393717  217964 pod_ready.go:38] duration metric: took 4m13.926189528s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:07:24.393759  217964 kubeadm.go:604] restartCluster took 4m30.223477483s
	W1020 00:07:24.393900  217964 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:07:24.393937  217964 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:07:22.845867  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.847139  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:26.211224  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:28.211665  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:26.510388  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:29.009786  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:27.703411  217964 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.309447788s)
	I1020 00:07:27.703482  217964 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:07:27.714110  217964 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:07:27.714196  217964 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:07:27.737171  217964 cri.go:76] found id: ""
	I1020 00:07:27.737232  217964 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:07:27.744890  217964 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:07:27.744945  217964 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:07:27.751572  217964 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:07:27.751617  217964 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:07:27.347106  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:29.846020  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:30.711728  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:33.211464  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:31.010733  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:33.510706  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:31.846057  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:33.846597  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:35.846679  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:35.213797  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:37.711700  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:36.010508  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:38.509836  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:37.846785  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:39.847032  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.144522  217964 out.go:203]   - Generating certificates and keys ...
	I1020 00:07:42.147308  217964 out.go:203]   - Booting up control plane ...
	I1020 00:07:42.149841  217964 out.go:203]   - Configuring RBAC rules ...
	I1020 00:07:42.151880  217964 cni.go:93] Creating CNI manager for ""
	I1020 00:07:42.151896  217964 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:07:40.210971  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.211584  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:40.510537  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.511178  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:45.011088  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.153553  217964 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:07:42.153608  217964 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:07:42.157249  217964 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl ...
	I1020 00:07:42.157274  217964 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:07:42.170228  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:07:42.330213  217964 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:07:42.330271  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.330302  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=no-preload-20211020000033-6808 minikube.k8s.io/updated_at=2021_10_20T00_07_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.347371  217964 ops.go:34] apiserver oom_adj: -16
	I1020 00:07:42.438482  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.991553  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:43.491084  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:43.991660  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:44.491106  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:44.991625  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:45.491988  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:45.991775  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.346740  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:44.845580  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:44.711270  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:46.711620  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:48.711772  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:47.011459  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:49.510825  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:46.491968  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:46.991135  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:47.491341  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:47.991758  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:48.491464  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:48.991469  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:49.491286  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:49.991003  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:50.491666  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:50.991660  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:46.846761  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:49.346661  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:51.211576  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:53.711189  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:51.510943  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:53.511349  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:51.491183  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:51.991111  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:52.491394  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:52.991231  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:53.491033  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:53.991015  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:54.491192  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:54.991572  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:55.217289  217964 kubeadm.go:985] duration metric: took 12.887077795s to wait for elevateKubeSystemPrivileges.
	I1020 00:07:55.217327  217964 kubeadm.go:392] StartCluster complete in 5m1.088683309s
	I1020 00:07:55.217350  217964 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:07:55.217439  217964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:07:55.219216  217964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:07:55.735530  217964 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20211020000033-6808" rescaled to 1
	I1020 00:07:55.735591  217964 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.3-rc.0 ControlPlane:true Worker:true}
	I1020 00:07:55.738246  217964 out.go:176] * Verifying Kubernetes components...
	I1020 00:07:55.735637  217964 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:07:55.735651  217964 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:07:55.738434  217964 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738455  217964 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.738466  217964 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:07:55.738478  217964 addons.go:65] Setting dashboard=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738492  217964 addons.go:65] Setting metrics-server=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738500  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.738501  217964 addons.go:153] Setting addon dashboard=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.738513  217964 addons.go:165] addon dashboard should already be in state true
	I1020 00:07:55.738519  217964 addons.go:153] Setting addon metrics-server=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.738530  217964 addons.go:165] addon metrics-server should already be in state true
	I1020 00:07:55.738544  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.738558  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.738479  217964 addons.go:65] Setting default-storageclass=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738613  217964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20211020000033-6808"
	I1020 00:07:55.738308  217964 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:07:55.735871  217964 config.go:176] Loaded profile config "no-preload-20211020000033-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.3-rc.0
	I1020 00:07:55.738917  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.739004  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.739010  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.739009  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.808883  217964 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:07:55.810465  217964 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:07:55.810520  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:07:55.810531  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:07:55.810582  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.809042  217964 addons.go:153] Setting addon default-storageclass=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.810793  217964 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:07:55.810827  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.813722  217964 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:07:55.811173  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.813840  217964 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:07:55.813855  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:07:55.815456  217964 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:07:55.815523  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:07:55.815536  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:07:55.813904  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.815582  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.829773  217964 node_ready.go:35] waiting up to 6m0s for node "no-preload-20211020000033-6808" to be "Ready" ...
	I1020 00:07:55.830179  217964 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 00:07:55.833048  217964 node_ready.go:49] node "no-preload-20211020000033-6808" has status "Ready":"True"
	I1020 00:07:55.833065  217964 node_ready.go:38] duration metric: took 3.252993ms waiting for node "no-preload-20211020000033-6808" to be "Ready" ...
	I1020 00:07:55.833076  217964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:07:55.838688  217964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace to be "Ready" ...
	I1020 00:07:55.873133  217964 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:07:55.873158  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:07:55.873213  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.879925  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:55.883039  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:55.890722  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:55.923946  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:51.846390  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:54.346641  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:56.129047  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:07:56.129076  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:07:56.129112  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:07:56.129127  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:07:56.130934  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:07:56.131644  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:07:56.144939  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:07:56.144982  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:07:56.228011  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:07:56.228041  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:07:56.233470  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:07:56.233491  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:07:56.315009  217964 start.go:729] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1020 00:07:56.318511  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:07:56.318542  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:07:56.326408  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:07:56.326432  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:07:56.415567  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:07:56.418479  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:07:56.418503  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:07:56.435372  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:07:56.435395  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:07:56.527801  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:07:56.527862  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:07:56.544511  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:07:56.544538  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:07:56.725289  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:07:56.725367  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:07:56.825193  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:07:57.433165  217964 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.017530987s)
	I1020 00:07:57.433302  217964 addons.go:375] Verifying addon metrics-server=true in "no-preload-20211020000033-6808"
	I1020 00:07:57.923014  217964 pod_ready.go:102] pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.213490  217964 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.38824505s)
	I1020 00:07:55.711375  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.211495  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:56.010655  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.510477  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.215639  217964 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1020 00:07:58.215670  217964 addons.go:406] enableAddons completed in 2.480020377s
	I1020 00:08:00.420921  217964 pod_ready.go:102] pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:56.847781  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.848054  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:00.852554  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:02.348310  217964 pod_ready.go:92] pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.348334  217964 pod_ready.go:81] duration metric: took 6.509614796s waiting for pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.348347  217964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.352320  217964 pod_ready.go:92] pod "etcd-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.352335  217964 pod_ready.go:81] duration metric: took 3.98209ms waiting for pod "etcd-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.352350  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.356449  217964 pod_ready.go:92] pod "kube-apiserver-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.356466  217964 pod_ready.go:81] duration metric: took 4.108911ms waiting for pod "kube-apiserver-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.356477  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.360474  217964 pod_ready.go:92] pod "kube-controller-manager-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.360490  217964 pod_ready.go:81] duration metric: took 4.005248ms waiting for pod "kube-controller-manager-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.360501  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4hhrk" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.365409  217964 pod_ready.go:92] pod "kube-proxy-4hhrk" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.365426  217964 pod_ready.go:81] duration metric: took 4.91854ms waiting for pod "kube-proxy-4hhrk" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.365435  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.745032  217964 pod_ready.go:92] pod "kube-scheduler-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.745052  217964 pod_ready.go:81] duration metric: took 379.609406ms waiting for pod "kube-scheduler-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.745062  217964 pod_ready.go:38] duration metric: took 6.911974423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:02.745085  217964 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:08:02.745133  217964 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:08:02.769571  217964 api_server.go:71] duration metric: took 7.033945394s to wait for apiserver process to appear ...
	I1020 00:08:02.769596  217964 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:08:02.769607  217964 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1020 00:08:02.774610  217964 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1020 00:08:02.775430  217964 api_server.go:140] control plane version: v1.22.3-rc.0
	I1020 00:08:02.775456  217964 api_server.go:130] duration metric: took 5.850374ms to wait for apiserver health ...
	I1020 00:08:02.775467  217964 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:08:02.948826  217964 system_pods.go:59] 9 kube-system pods found
	I1020 00:08:02.948857  217964 system_pods.go:61] "coredns-78fcd69978-cfgnn" [13e4123e-ecd5-4af3-abf8-f2edc2338e97] Running
	I1020 00:08:02.948866  217964 system_pods.go:61] "etcd-no-preload-20211020000033-6808" [c8a3f71b-8d8b-4eb7-acdd-08ec2df16b25] Running
	I1020 00:08:02.948873  217964 system_pods.go:61] "kindnet-ql4j9" [7a45d44d-39ec-4d0c-9786-5527e9bf2cd3] Running
	I1020 00:08:02.948882  217964 system_pods.go:61] "kube-apiserver-no-preload-20211020000033-6808" [62e00e4f-3a3f-4f4a-9763-441d986e55d7] Running
	I1020 00:08:02.948896  217964 system_pods.go:61] "kube-controller-manager-no-preload-20211020000033-6808" [ed578b8c-ed47-42ba-9326-65b0a93186f4] Running
	I1020 00:08:02.948906  217964 system_pods.go:61] "kube-proxy-4hhrk" [50ade4ef-0316-42ff-977b-f03d8b325bd7] Running
	I1020 00:08:02.948913  217964 system_pods.go:61] "kube-scheduler-no-preload-20211020000033-6808" [4a01822f-284b-4ca8-be6d-6320ab89593e] Running
	I1020 00:08:02.948924  217964 system_pods.go:61] "metrics-server-7c784ccb57-f6x62" [ed106f29-92f6-4616-a800-f9aed70e733d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:02.948936  217964 system_pods.go:61] "storage-provisioner" [05447f48-bd32-4072-ac75-5fa7c4f412b0] Running
	I1020 00:08:02.948948  217964 system_pods.go:74] duration metric: took 173.474347ms to wait for pod list to return data ...
	I1020 00:08:02.948962  217964 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:08:03.145070  217964 default_sa.go:45] found service account: "default"
	I1020 00:08:03.145092  217964 default_sa.go:55] duration metric: took 196.12081ms for default service account to be created ...
	I1020 00:08:03.145099  217964 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:08:03.349005  217964 system_pods.go:86] 9 kube-system pods found
	I1020 00:08:03.349036  217964 system_pods.go:89] "coredns-78fcd69978-cfgnn" [13e4123e-ecd5-4af3-abf8-f2edc2338e97] Running
	I1020 00:08:03.349044  217964 system_pods.go:89] "etcd-no-preload-20211020000033-6808" [c8a3f71b-8d8b-4eb7-acdd-08ec2df16b25] Running
	I1020 00:08:03.349052  217964 system_pods.go:89] "kindnet-ql4j9" [7a45d44d-39ec-4d0c-9786-5527e9bf2cd3] Running
	I1020 00:08:03.349059  217964 system_pods.go:89] "kube-apiserver-no-preload-20211020000033-6808" [62e00e4f-3a3f-4f4a-9763-441d986e55d7] Running
	I1020 00:08:03.349069  217964 system_pods.go:89] "kube-controller-manager-no-preload-20211020000033-6808" [ed578b8c-ed47-42ba-9326-65b0a93186f4] Running
	I1020 00:08:03.349078  217964 system_pods.go:89] "kube-proxy-4hhrk" [50ade4ef-0316-42ff-977b-f03d8b325bd7] Running
	I1020 00:08:03.349086  217964 system_pods.go:89] "kube-scheduler-no-preload-20211020000033-6808" [4a01822f-284b-4ca8-be6d-6320ab89593e] Running
	I1020 00:08:03.349097  217964 system_pods.go:89] "metrics-server-7c784ccb57-f6x62" [ed106f29-92f6-4616-a800-f9aed70e733d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:03.349108  217964 system_pods.go:89] "storage-provisioner" [05447f48-bd32-4072-ac75-5fa7c4f412b0] Running
	I1020 00:08:03.349120  217964 system_pods.go:126] duration metric: took 204.015969ms to wait for k8s-apps to be running ...
	I1020 00:08:03.349131  217964 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:08:03.349174  217964 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:08:03.360135  217964 system_svc.go:56] duration metric: took 10.998521ms WaitForService to wait for kubelet.
	I1020 00:08:03.360157  217964 kubeadm.go:547] duration metric: took 7.624534921s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:08:03.360175  217964 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:08:03.545634  217964 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:08:03.545662  217964 node_conditions.go:123] node cpu capacity is 8
	I1020 00:08:03.545677  217964 node_conditions.go:105] duration metric: took 185.498205ms to run NodePressure ...
	I1020 00:08:03.545689  217964 start.go:231] waiting for startup goroutines ...
	I1020 00:08:03.591899  217964 start.go:462] kubectl: 1.20.5, cluster: 1.22.3-rc.0 (minor skew: 2)
	I1020 00:08:03.596167  217964 out.go:176] 
	W1020 00:08:03.596327  217964 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.3-rc.0.
	I1020 00:08:03.598195  217964 out.go:176]   - Want kubectl v1.22.3-rc.0? Try 'minikube kubectl -- get pods -A'
	I1020 00:08:03.600119  217964 out.go:176] * Done! kubectl is now configured to use "no-preload-20211020000033-6808" cluster and "default" namespace by default
	I1020 00:08:00.711172  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:02.786579  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:00.510709  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:03.013660  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:03.347464  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:05.846255  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:05.212036  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:07.711712  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:05.509870  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:07.510555  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:10.010309  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:07.846723  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:10.346355  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:10.211546  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:12.711063  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:12.010697  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:14.510675  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:12.346781  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:14.846938  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:15.211810  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:17.215154  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:17.009773  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:17.506010  219746 pod_ready.go:81] duration metric: took 4m0.401043188s waiting for pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace to be "Ready" ...
	E1020 00:08:17.506035  219746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:08:17.506059  219746 pod_ready.go:38] duration metric: took 4m3.721290611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:17.506093  219746 kubeadm.go:604] restartCluster took 5m15.062777315s
	W1020 00:08:17.506227  219746 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:08:17.506266  219746 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:08:17.346810  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:19.347735  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:19.711284  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:22.215935  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:21.361777  219746 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.855484794s)
	I1020 00:08:21.361853  219746 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:08:21.372161  219746 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:08:21.372221  219746 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:08:21.396947  219746 cri.go:76] found id: ""
	I1020 00:08:21.397011  219746 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:08:21.404159  219746 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:08:21.404233  219746 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:08:21.410834  219746 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:08:21.410879  219746 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:08:21.846615  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:23.846773  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:24.711365  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:27.213578  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:26.346501  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:28.846262  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:30.847610  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:33.071488  219746 out.go:203]   - Generating certificates and keys ...
	I1020 00:08:33.074047  219746 out.go:203]   - Booting up control plane ...
	I1020 00:08:33.076569  219746 out.go:203]   - Configuring RBAC rules ...
	I1020 00:08:33.078531  219746 cni.go:93] Creating CNI manager for ""
	I1020 00:08:33.078549  219746 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:08:29.710624  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:31.710767  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:33.710938  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:33.080226  219746 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:08:33.080290  219746 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:08:33.083720  219746 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I1020 00:08:33.083735  219746 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:08:33.095852  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:08:33.300572  219746 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:08:33.300646  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:33.300663  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=old-k8s-version-20211020000023-6808 minikube.k8s.io/updated_at=2021_10_20T00_08_33_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:33.315863  219746 ops.go:34] apiserver oom_adj: 16
	I1020 00:08:33.315888  219746 ops.go:39] adjusting apiserver oom_adj to -10
	I1020 00:08:33.315903  219746 ssh_runner.go:152] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
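The ops.go lines above show minikube reading `/proc/$(pgrep kube-apiserver)/oom_adj` and then writing `-10` into it via `echo | sudo tee`. The same mechanics can be sketched unprivileged against the current shell: lowering another process's OOM score needs root, so this demo targets `$$` and writes the existing value back unchanged (the pid and the no-op write are stand-ins, not what minikube does).

```shell
# Hedged sketch of the oom_adj adjustment logged above. Requires Linux
# (/proc); modern kernels expose oom_score_adj rather than oom_adj.
pid=$$
score=$(cat /proc/$pid/oom_score_adj)
# Same tee-style write as the log, but writing the value back unchanged,
# which never requires privileges (only lowering the score does).
echo "$score" | tee /proc/$pid/oom_score_adj >/dev/null
echo "oom_score_adj for $pid: $score"
```

Writing a *lower* (more protected) value, as minikube does for the apiserver, is the part that requires root, which is why the log runs it under `sudo tee`.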
	I1020 00:08:33.424363  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:33.985260  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:34.485258  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:34.985391  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:32.847885  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:35.345784  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:36.210476  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:38.211055  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:35.485080  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:35.984927  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:36.484734  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:36.984836  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:37.485084  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:37.985614  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:38.485004  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:38.985630  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:39.485598  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:39.984957  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:37.345941  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:39.846062  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:40.211341  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:42.711586  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:40.485621  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:40.985685  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:41.484932  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:41.985040  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:42.485382  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:42.985056  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:43.484799  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:43.985351  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:44.485115  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:44.985648  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:42.346242  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:44.845735  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:45.484713  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:45.985006  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:46.485030  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:46.985024  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:47.485096  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:47.550489  219746 kubeadm.go:985] duration metric: took 14.249908695s to wait for elevateKubeSystemPrivileges.
	I1020 00:08:47.550521  219746 kubeadm.go:392] StartCluster complete in 5m45.148388324s
	I1020 00:08:47.550537  219746 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:08:47.550609  219746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:08:47.551529  219746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:08:48.067140  219746 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20211020000023-6808" rescaled to 1
	I1020 00:08:48.067192  219746 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1020 00:08:48.069058  219746 out.go:176] * Verifying Kubernetes components...
	I1020 00:08:48.069118  219746 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:08:48.067244  219746 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:08:48.067257  219746 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:08:48.069217  219746 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069233  219746 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069241  219746 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069245  219746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.069249  219746 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:08:48.069282  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.069287  219746 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069313  219746 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.069327  219746 addons.go:165] addon metrics-server should already be in state true
	I1020 00:08:48.069358  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.069223  219746 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069440  219746 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.069451  219746 addons.go:165] addon dashboard should already be in state true
	I1020 00:08:48.069470  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.067458  219746 config.go:176] Loaded profile config "old-k8s-version-20211020000023-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I1020 00:08:48.069580  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.069750  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.069791  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.069927  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.141777  219746 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:08:48.143331  219746 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:08:48.143404  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:08:48.143413  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:08:48.143463  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.146691  219746 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:08:44.711630  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:46.712138  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:49.211204  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:48.148100  219746 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:08:48.148157  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:08:48.148166  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:08:48.146814  219746 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:08:48.148215  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.148223  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:08:48.148255  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.155782  219746 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.155825  219746 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:08:48.155854  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.156265  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.195107  219746 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20211020000023-6808" to be "Ready" ...
	I1020 00:08:48.195261  219746 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
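The long pipeline above injects a `hosts {}` block into the CoreDNS Corefile with `sed`'s insert command before `kubectl replace`-ing the configmap. Stripped of the kubectl plumbing, the sed step can be reproduced on a sample Corefile (the Corefile text below is illustrative, not pulled from a cluster; the `\n` escapes in the insert text are a GNU sed extension, exactly as the log uses them):

```shell
# Sketch of the host-record injection: insert a hosts{} block before the
# "forward . /etc/resolv.conf" line of a CoreDNS Corefile.
corefile='.:53 {
    errors
    forward . /etc/resolv.conf
    cache 30
}'
patched=$(printf '%s\n' "$corefile" | \
  sed '/^    forward . \/etc\/resolv.conf.*/i \    hosts {\n       192.168.76.1 host.minikube.internal\n       fallthrough\n    }')
echo "$patched"
```

The `hosts` plugin entry resolves `host.minikube.internal` to the gateway IP before CoreDNS falls through to the upstream `forward` resolver, which is what the later "host record injected into CoreDNS" line confirms.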
	I1020 00:08:48.203014  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.207510  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.214268  219746 node_ready.go:49] node "old-k8s-version-20211020000023-6808" has status "Ready":"True"
	I1020 00:08:48.214284  219746 node_ready.go:38] duration metric: took 19.145693ms waiting for node "old-k8s-version-20211020000023-6808" to be "Ready" ...
	I1020 00:08:48.214295  219746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:48.217880  219746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:48.231944  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.232011  219746 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:08:48.232029  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:08:48.232093  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.279410  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.327921  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:08:48.327954  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:08:48.333409  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:08:48.333434  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:08:48.429149  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:08:48.429179  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:08:48.429303  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:08:48.429317  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:08:48.434642  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:08:48.444421  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:08:48.444445  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:08:48.446726  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:08:48.446747  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:08:48.530753  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:08:48.530778  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:08:48.532601  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:08:48.534196  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:08:48.613922  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:08:48.613950  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:08:48.630377  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:08:48.630443  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:08:48.720749  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:08:48.720781  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:08:48.729776  219746 start.go:729] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1020 00:08:48.744123  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:08:48.744149  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:08:48.831342  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:08:48.831370  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:08:48.917932  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:08:49.532853  219746 addons.go:375] Verifying addon metrics-server=true in "old-k8s-version-20211020000023-6808"
	I1020 00:08:49.953921  219746 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.035938266s)
	I1020 00:08:49.956053  219746 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1020 00:08:49.956081  219746 addons.go:406] enableAddons completed in 1.888832208s
	I1020 00:08:46.846330  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:48.847352  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:51.211265  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:53.710696  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:50.238063  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:52.737706  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:51.347107  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:53.847545  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:55.711569  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:58.211255  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:55.237490  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:57.237725  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:57.737885  219746 pod_ready.go:92] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:57.737909  219746 pod_ready.go:81] duration metric: took 9.520008645s waiting for pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:57.737921  219746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:57.741506  219746 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:57.741522  219746 pod_ready.go:81] duration metric: took 3.596211ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:57.741530  219746 pod_ready.go:38] duration metric: took 9.527217072s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:57.741549  219746 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:08:57.741591  219746 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:08:57.763388  219746 api_server.go:71] duration metric: took 9.69617489s to wait for apiserver process to appear ...
	I1020 00:08:57.763410  219746 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:08:57.763421  219746 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 00:08:57.767906  219746 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 00:08:57.768752  219746 api_server.go:140] control plane version: v1.14.0
	I1020 00:08:57.768773  219746 api_server.go:130] duration metric: took 5.356522ms to wait for apiserver health ...
	I1020 00:08:57.768780  219746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:08:57.771689  219746 system_pods.go:59] 5 kube-system pods found
	I1020 00:08:57.771710  219746 system_pods.go:61] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771714  219746 system_pods.go:61] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771717  219746 system_pods.go:61] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771725  219746 system_pods.go:61] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:57.771729  219746 system_pods.go:61] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771764  219746 system_pods.go:74] duration metric: took 2.979011ms to wait for pod list to return data ...
	I1020 00:08:57.771770  219746 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:08:57.773916  219746 default_sa.go:45] found service account: "default"
	I1020 00:08:57.773933  219746 default_sa.go:55] duration metric: took 2.159599ms for default service account to be created ...
	I1020 00:08:57.773939  219746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:08:57.776457  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:57.776476  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776482  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776485  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776492  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:57.776496  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776513  219746 retry.go:31] will retry after 227.257272ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:58.007180  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:58.007205  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007211  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007215  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007221  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:58.007227  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007240  219746 retry.go:31] will retry after 307.639038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:58.319324  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:58.319353  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319358  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319362  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319369  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:58.319373  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319387  219746 retry.go:31] will retry after 348.248857ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:58.671300  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:58.671342  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671348  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671352  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671359  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:58.671363  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671378  219746 retry.go:31] will retry after 437.769008ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:59.113399  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:59.113430  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113437  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113441  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113448  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:59.113453  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113468  219746 retry.go:31] will retry after 665.003868ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:59.782509  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:59.782537  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782543  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782547  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782554  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:59.782559  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782573  219746 retry.go:31] will retry after 655.575962ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:56.346022  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:58.346418  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:00.845764  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:00.213065  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:02.711628  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:00.442014  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:00.442039  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442044  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442049  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442056  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:00.442060  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442074  219746 retry.go:31] will retry after 812.142789ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:01.259542  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:01.259569  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259574  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259578  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259585  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:01.259592  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259607  219746 retry.go:31] will retry after 1.109165795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:02.372295  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:02.372331  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372338  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372343  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372351  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:02.372358  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372376  219746 retry.go:31] will retry after 1.54277181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:03.919081  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:03.919109  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919116  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919121  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919130  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:03.919137  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919159  219746 retry.go:31] will retry after 2.200241603s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:03.347031  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:05.845922  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:05.210946  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:07.710832  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:06.123880  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:06.123907  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123912  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123916  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123923  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:06.123928  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123945  219746 retry.go:31] will retry after 2.087459713s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:08.214961  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:08.214986  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.214991  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.214995  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.215002  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:08.215006  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.215021  219746 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:07.846260  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:10.396130  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:10.211117  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:12.211422  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:10.834352  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:10.834384  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834391  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834398  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834407  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:10.834414  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834436  219746 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:14.936447  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:14.936470  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936475  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936479  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936485  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:14.936490  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936505  219746 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:12.845655  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:14.846612  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:14.711730  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:17.211626  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:18.823278  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:18.823303  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823308  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823312  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823321  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:18.823325  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823340  219746 retry.go:31] will retry after 6.722686426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:16.846639  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:19.345981  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:19.711337  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:22.211286  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:24.211657  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:21.846122  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:23.846646  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:25.848464  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:26.711508  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:29.211288  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:25.549748  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:25.549776  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549781  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549785  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549792  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:25.549797  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549812  219746 retry.go:31] will retry after 7.804314206s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:28.346283  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:30.845629  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:31.710942  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:34.212171  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:33.358144  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:33.358169  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358175  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358179  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358186  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:33.358192  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358210  219746 retry.go:31] will retry after 8.98756758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:32.846280  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:35.346072  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:36.711106  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:38.711680  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:37.346103  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:39.346525  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:41.211241  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:43.710261  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:42.350689  219746 system_pods.go:86] 7 kube-system pods found
	I1020 00:09:42.350716  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350721  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350726  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:09:42.350729  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350733  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:09:42.350741  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:42.350759  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350774  219746 retry.go:31] will retry after 8.483786333s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:41.846596  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:44.345879  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:45.341698  234720 pod_ready.go:81] duration metric: took 4m0.385310648s waiting for pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace to be "Ready" ...
	E1020 00:09:45.341722  234720 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:09:45.341744  234720 pod_ready.go:38] duration metric: took 4m13.983069507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:09:45.341772  234720 kubeadm.go:604] restartCluster took 4m31.116400212s
	W1020 00:09:45.341897  234720 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:09:45.341923  234720 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:09:48.648780  234720 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.306837167s)
	I1020 00:09:48.648848  234720 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:09:48.659189  234720 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:09:48.659255  234720 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:09:48.682898  234720 cri.go:76] found id: ""
	I1020 00:09:48.682953  234720 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:09:48.689781  234720 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:09:48.689847  234720 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:09:48.696235  234720 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:09:48.696275  234720 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:09:45.713715  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:48.211644  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:48.959025  234720 out.go:203]   - Generating certificates and keys ...
	I1020 00:09:49.766256  234720 out.go:203]   - Booting up control plane ...
	I1020 00:09:50.710920  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:52.711147  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:50.839010  219746 system_pods.go:86] 8 kube-system pods found
	I1020 00:09:50.839036  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839041  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839045  219746 system_pods.go:89] "kube-apiserver-old-k8s-version-20211020000023-6808" [071c1402-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:09:50.839049  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839053  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839057  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839064  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:50.839068  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839083  219746 retry.go:31] will retry after 11.506963942s: missing components: etcd, kube-apiserver
	I1020 00:09:54.711542  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:57.211505  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:02.334254  234720 out.go:203]   - Configuring RBAC rules ...
	I1020 00:10:02.747781  234720 cni.go:93] Creating CNI manager for ""
	I1020 00:10:02.747804  234720 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:09:59.711870  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:02.211571  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:04.211770  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:02.351168  219746 system_pods.go:86] 9 kube-system pods found
	I1020 00:10:02.351202  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351211  219746 system_pods.go:89] "etcd-old-k8s-version-20211020000023-6808" [0daacb3c-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:10:02.351217  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351224  219746 system_pods.go:89] "kube-apiserver-old-k8s-version-20211020000023-6808" [071c1402-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351232  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351238  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351245  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351261  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:02.351271  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351289  219746 retry.go:31] will retry after 18.277510002s: missing components: etcd
	I1020 00:10:02.749516  234720 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:10:02.749566  234720 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:10:02.753044  234720 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1020 00:10:02.753063  234720 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:10:02.765379  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:10:02.919931  234720 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:10:02.919985  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=default-k8s-different-port-20211020000323-6808 minikube.k8s.io/updated_at=2021_10_20T00_10_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:02.919986  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:02.935306  234720 ops.go:34] apiserver oom_adj: -16
	I1020 00:10:03.021467  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:03.578712  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:04.079054  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:04.579437  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:05.078582  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:05.578722  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:06.078723  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:06.711006  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:09.211032  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:06.578539  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:07.079219  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:07.579268  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:08.078686  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:08.579083  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:09.078510  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:09.578920  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:10.078933  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:10.579323  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:11.079521  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:11.711335  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:14.211886  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:11.578763  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:12.079233  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:12.578496  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:13.079313  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:13.578672  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:14.078688  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:14.579315  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:15.078552  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:15.578465  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:16.078518  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:16.138441  234720 kubeadm.go:985] duration metric: took 13.218517339s to wait for elevateKubeSystemPrivileges.
	I1020 00:10:16.138470  234720 kubeadm.go:392] StartCluster complete in 5m1.959184868s
	I1020 00:10:16.138486  234720 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:10:16.138564  234720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:10:16.139606  234720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:10:16.658197  234720 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211020000323-6808" rescaled to 1
	I1020 00:10:16.658247  234720 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1020 00:10:16.660424  234720 out.go:176] * Verifying Kubernetes components...
	I1020 00:10:16.660472  234720 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:10:16.658307  234720 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:10:16.658320  234720 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:10:16.660599  234720 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660617  234720 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660618  234720 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660628  234720 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660635  234720 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660636  234720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211020000323-6808"
	W1020 00:10:16.660643  234720 addons.go:165] addon dashboard should already be in state true
	I1020 00:10:16.660672  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	W1020 00:10:16.660622  234720 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:10:16.660706  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	I1020 00:10:16.658474  234720 config.go:176] Loaded profile config "default-k8s-different-port-20211020000323-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:10:16.660676  234720 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660846  234720 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211020000323-6808"
	W1020 00:10:16.660856  234720 addons.go:165] addon metrics-server should already be in state true
	I1020 00:10:16.660881  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	I1020 00:10:16.660979  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.661146  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.661162  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.661289  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.724367  234720 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211020000323-6808"
	W1020 00:10:16.724394  234720 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:10:16.724422  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	I1020 00:10:16.724910  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.727583  234720 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:10:16.729827  234720 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:10:16.727698  234720 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:10:16.729874  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:10:16.729878  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:10:16.729883  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:10:16.731406  234720 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:10:16.729928  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.729933  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.713586  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:19.210895  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:16.733093  234720 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:10:16.733162  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:10:16.733176  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:10:16.733230  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.738519  234720 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211020000323-6808" to be "Ready" ...
	I1020 00:10:16.738946  234720 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 00:10:16.741498  234720 node_ready.go:49] node "default-k8s-different-port-20211020000323-6808" has status "Ready":"True"
	I1020 00:10:16.741523  234720 node_ready.go:38] duration metric: took 2.977399ms waiting for node "default-k8s-different-port-20211020000323-6808" to be "Ready" ...
	I1020 00:10:16.741534  234720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:10:16.753083  234720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-dh86x" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:16.793382  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.794997  234720 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:10:16.795019  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:10:16.795071  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.795310  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.795316  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.856577  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.926352  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:10:16.926378  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:10:16.927216  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:10:16.927234  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:10:16.927252  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:10:16.939653  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:10:16.939675  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:10:16.942519  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:10:16.942535  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:10:16.954050  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:10:16.954070  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:10:17.023914  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:10:17.023996  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:10:17.027519  234720 start.go:729] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I1020 00:10:17.112522  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:10:17.114129  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:10:17.114181  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:10:17.119679  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:10:17.138032  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:10:17.138054  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:10:17.235797  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:10:17.235834  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:10:17.319900  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:10:17.319926  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:10:17.413422  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:10:17.413452  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:10:17.513251  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:10:17.513281  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:10:17.613735  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:10:18.232785  234720 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113068494s)
	I1020 00:10:18.232821  234720 addons.go:375] Verifying addon metrics-server=true in "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:18.824654  234720 pod_ready.go:102] pod "coredns-78fcd69978-dh86x" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:19.422477  234720 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.808672424s)
	I1020 00:10:20.635136  219746 system_pods.go:86] 9 kube-system pods found
	I1020 00:10:20.635173  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635182  219746 system_pods.go:89] "etcd-old-k8s-version-20211020000023-6808" [0daacb3c-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635194  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635201  219746 system_pods.go:89] "kube-apiserver-old-k8s-version-20211020000023-6808" [071c1402-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635208  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635215  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635226  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635242  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:20.635249  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635260  219746 system_pods.go:126] duration metric: took 1m22.861315984s to wait for k8s-apps to be running ...
	I1020 00:10:20.635275  219746 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:10:20.635335  219746 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:10:20.647789  219746 system_svc.go:56] duration metric: took 12.50524ms WaitForService to wait for kubelet.
	I1020 00:10:20.647860  219746 kubeadm.go:547] duration metric: took 1m32.580603826s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:10:20.647886  219746 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:10:20.650510  219746 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:10:20.650533  219746 node_conditions.go:123] node cpu capacity is 8
	I1020 00:10:20.650545  219746 node_conditions.go:105] duration metric: took 2.653157ms to run NodePressure ...
	I1020 00:10:20.650558  219746 start.go:231] waiting for startup goroutines ...
	I1020 00:10:20.696114  219746 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I1020 00:10:20.698415  219746 out.go:176] 
	W1020 00:10:20.698573  219746 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I1020 00:10:20.700020  219746 out.go:176]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I1020 00:10:20.701497  219746 out.go:176] * Done! kubectl is now configured to use "old-k8s-version-20211020000023-6808" cluster and "default" namespace by default
	I1020 00:10:19.424459  234720 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1020 00:10:19.424490  234720 addons.go:406] enableAddons completed in 2.766176689s
	I1020 00:10:21.317262  234720 pod_ready.go:102] pod "coredns-78fcd69978-dh86x" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:22.267399  234720 pod_ready.go:92] pod "coredns-78fcd69978-dh86x" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.267422  234720 pod_ready.go:81] duration metric: took 5.51431187s waiting for pod "coredns-78fcd69978-dh86x" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.267435  234720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-tsrfv" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.314604  234720 pod_ready.go:92] pod "coredns-78fcd69978-tsrfv" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.314627  234720 pod_ready.go:81] duration metric: took 47.185255ms waiting for pod "coredns-78fcd69978-tsrfv" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.314640  234720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.319204  234720 pod_ready.go:92] pod "etcd-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.319226  234720 pod_ready.go:81] duration metric: took 4.576892ms waiting for pod "etcd-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.319240  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.323789  234720 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.323861  234720 pod_ready.go:81] duration metric: took 4.610771ms waiting for pod "kube-apiserver-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.323875  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.329767  234720 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.329784  234720 pod_ready.go:81] duration metric: took 5.900165ms waiting for pod "kube-controller-manager-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.329795  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rknr2" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.666474  234720 pod_ready.go:92] pod "kube-proxy-rknr2" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.666498  234720 pod_ready.go:81] duration metric: took 336.694675ms waiting for pod "kube-proxy-rknr2" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.666510  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:23.066455  234720 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:23.066480  234720 pod_ready.go:81] duration metric: took 399.960854ms waiting for pod "kube-scheduler-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:23.066491  234720 pod_ready.go:38] duration metric: took 6.324944414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:10:23.066511  234720 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:10:23.066554  234720 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:10:23.132714  234720 api_server.go:71] duration metric: took 6.474440468s to wait for apiserver process to appear ...
	I1020 00:10:23.132742  234720 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:10:23.132753  234720 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I1020 00:10:23.138122  234720 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I1020 00:10:23.139100  234720 api_server.go:140] control plane version: v1.22.2
	I1020 00:10:23.139119  234720 api_server.go:130] duration metric: took 6.370574ms to wait for apiserver health ...
	I1020 00:10:23.139128  234720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:10:23.269719  234720 system_pods.go:59] 10 kube-system pods found
	I1020 00:10:23.269753  234720 system_pods.go:61] "coredns-78fcd69978-dh86x" [188f6133-ab49-46ad-be91-146ffd93d595] Running
	I1020 00:10:23.269761  234720 system_pods.go:61] "coredns-78fcd69978-tsrfv" [2add0bd9-74d3-45a5-bd11-ef5d70cf45d2] Running
	I1020 00:10:23.269774  234720 system_pods.go:61] "etcd-default-k8s-different-port-20211020000323-6808" [dbcab5b3-3105-4393-9d37-4552c66f5505] Running
	I1020 00:10:23.269780  234720 system_pods.go:61] "kindnet-6gmwx" [2b142d48-e636-4759-86c9-40668c5baf19] Running
	I1020 00:10:23.269786  234720 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20211020000323-6808" [b1577e5b-0e7d-4374-b227-c92ed774560e] Running
	I1020 00:10:23.269792  234720 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20211020000323-6808" [29bd374c-0660-412b-8dd7-d7c475101169] Running
	I1020 00:10:23.269798  234720 system_pods.go:61] "kube-proxy-rknr2" [f68b99a7-f5e6-42cc-8e61-c92d5b5aedaf] Running
	I1020 00:10:23.269805  234720 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20211020000323-6808" [e25b65d2-f014-460b-bb89-e17c0688bf81] Running
	I1020 00:10:23.269815  234720 system_pods.go:61] "metrics-server-7c784ccb57-26598" [efda9d90-3647-45ea-8851-1a0f6d98d293] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:23.269822  234720 system_pods.go:61] "storage-provisioner" [4ae5bfc6-73b5-4869-934c-6faf5678a670] Running
	I1020 00:10:23.269829  234720 system_pods.go:74] duration metric: took 130.695337ms to wait for pod list to return data ...
	I1020 00:10:23.269840  234720 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:10:23.466521  234720 default_sa.go:45] found service account: "default"
	I1020 00:10:23.466544  234720 default_sa.go:55] duration metric: took 196.697429ms for default service account to be created ...
	I1020 00:10:23.466553  234720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:10:23.668458  234720 system_pods.go:86] 10 kube-system pods found
	I1020 00:10:23.668483  234720 system_pods.go:89] "coredns-78fcd69978-dh86x" [188f6133-ab49-46ad-be91-146ffd93d595] Running
	I1020 00:10:23.668489  234720 system_pods.go:89] "coredns-78fcd69978-tsrfv" [2add0bd9-74d3-45a5-bd11-ef5d70cf45d2] Running
	I1020 00:10:23.668494  234720 system_pods.go:89] "etcd-default-k8s-different-port-20211020000323-6808" [dbcab5b3-3105-4393-9d37-4552c66f5505] Running
	I1020 00:10:23.668500  234720 system_pods.go:89] "kindnet-6gmwx" [2b142d48-e636-4759-86c9-40668c5baf19] Running
	I1020 00:10:23.668507  234720 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20211020000323-6808" [b1577e5b-0e7d-4374-b227-c92ed774560e] Running
	I1020 00:10:23.668515  234720 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20211020000323-6808" [29bd374c-0660-412b-8dd7-d7c475101169] Running
	I1020 00:10:23.668522  234720 system_pods.go:89] "kube-proxy-rknr2" [f68b99a7-f5e6-42cc-8e61-c92d5b5aedaf] Running
	I1020 00:10:23.668528  234720 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20211020000323-6808" [e25b65d2-f014-460b-bb89-e17c0688bf81] Running
	I1020 00:10:23.668538  234720 system_pods.go:89] "metrics-server-7c784ccb57-26598" [efda9d90-3647-45ea-8851-1a0f6d98d293] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:23.668549  234720 system_pods.go:89] "storage-provisioner" [4ae5bfc6-73b5-4869-934c-6faf5678a670] Running
	I1020 00:10:23.668558  234720 system_pods.go:126] duration metric: took 201.99918ms to wait for k8s-apps to be running ...
	I1020 00:10:23.668570  234720 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:10:23.668613  234720 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:10:23.679089  234720 system_svc.go:56] duration metric: took 10.514134ms WaitForService to wait for kubelet.
	I1020 00:10:23.679111  234720 kubeadm.go:547] duration metric: took 7.020842249s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:10:23.679136  234720 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:10:23.866793  234720 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:10:23.866821  234720 node_conditions.go:123] node cpu capacity is 8
	I1020 00:10:23.866834  234720 node_conditions.go:105] duration metric: took 187.69259ms to run NodePressure ...
	I1020 00:10:23.866847  234720 start.go:231] waiting for startup goroutines ...
	I1020 00:10:23.912895  234720 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1020 00:10:23.915208  234720 out.go:176] 
	W1020 00:10:23.915412  234720 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.2.
	I1020 00:10:23.917035  234720 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1020 00:10:23.918582  234720 out.go:176] * Done! kubectl is now configured to use "default-k8s-different-port-20211020000323-6808" cluster and "default" namespace by default
	I1020 00:10:21.710877  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:23.769719  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:26.210694  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:28.211097  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:30.711547  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:32.711934  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:34.712546  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:37.211912  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:39.213345  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:41.710961  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:44.213511  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:46.711643  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:49.210819  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:51.211641  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:53.706588  240589 pod_ready.go:81] duration metric: took 4m0.06039874s waiting for pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace to be "Ready" ...
	E1020 00:10:53.706614  240589 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:10:53.706638  240589 pod_ready.go:38] duration metric: took 4m9.616806273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:10:53.706671  240589 kubeadm.go:604] restartCluster took 4m25.96065055s
	W1020 00:10:53.706790  240589 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:10:53.706815  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:10:56.932077  240589 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.225243695s)
	I1020 00:10:56.932134  240589 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:10:56.941684  240589 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:10:56.941744  240589 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:10:56.965507  240589 cri.go:76] found id: ""
	I1020 00:10:56.965564  240589 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:10:56.972272  240589 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:10:56.972325  240589 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:10:56.978549  240589 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:10:56.978589  240589 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:10:57.242241  240589 out.go:203]   - Generating certificates and keys ...
	I1020 00:10:57.984541  240589 out.go:203]   - Booting up control plane ...
	I1020 00:11:11.033196  240589 out.go:203]   - Configuring RBAC rules ...
	I1020 00:11:11.446085  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:11:11.446108  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:11:11.447896  240589 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:11:11.447949  240589 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:11:11.451383  240589 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1020 00:11:11.451398  240589 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:11:11.463512  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:11:11.616347  240589 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:11:11.616420  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:11.616447  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=embed-certs-20211020000119-6808 minikube.k8s.io/updated_at=2021_10_20T00_11_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:11.712450  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:11.723496  240589 ops.go:34] apiserver oom_adj: -16
	I1020 00:11:12.267918  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:12.767542  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:13.268227  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:13.767698  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:14.267839  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:14.768115  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:15.267377  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:15.767937  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:16.267770  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:16.767903  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:17.267999  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:17.768119  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:18.267990  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:18.767646  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:19.267940  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:19.768229  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:20.267617  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:20.767990  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:21.267747  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:21.768005  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:22.267449  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:22.767983  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:23.268138  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:23.767976  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:23.820916  240589 kubeadm.go:985] duration metric: took 12.204555651s to wait for elevateKubeSystemPrivileges.
	I1020 00:11:23.820946  240589 kubeadm.go:392] StartCluster complete in 4m56.117845004s
	I1020 00:11:23.820962  240589 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:11:23.821068  240589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:11:23.821853  240589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:11:24.339414  240589 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211020000119-6808" rescaled to 1
	I1020 00:11:24.339477  240589 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1020 00:11:24.340998  240589 out.go:176] * Verifying Kubernetes components...
	I1020 00:11:24.341064  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:24.339526  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:11:24.339559  240589 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:11:24.339714  240589 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:11:24.341180  240589 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341195  240589 addons.go:65] Setting metrics-server=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341200  240589 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341208  240589 addons.go:153] Setting addon metrics-server=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.341217  240589 addons.go:165] addon metrics-server should already be in state true
	I1020 00:11:24.341221  240589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211020000119-6808"
	I1020 00:11:24.341253  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.341206  240589 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.341305  240589 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:11:24.341353  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.341180  240589 addons.go:65] Setting dashboard=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341446  240589 addons.go:153] Setting addon dashboard=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.341455  240589 addons.go:165] addon dashboard should already be in state true
	I1020 00:11:24.341487  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.341570  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.341742  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.341843  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.341909  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.397619  240589 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:11:24.397754  240589 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:11:24.397772  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:11:24.399019  240589 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:11:24.399067  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:11:24.399080  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:11:24.397823  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.399127  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.402479  240589 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:11:24.403896  240589 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:11:24.403951  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:11:24.403964  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:11:24.404017  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.415577  240589 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.415605  240589 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:11:24.415634  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.416150  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.456366  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.457639  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.470756  240589 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:11:24.470776  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:11:24.470929  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.473391  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.476692  240589 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211020000119-6808" to be "Ready" ...
	I1020 00:11:24.476775  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 00:11:24.484753  240589 node_ready.go:49] node "embed-certs-20211020000119-6808" has status "Ready":"True"
	I1020 00:11:24.484767  240589 node_ready.go:38] duration metric: took 8.046413ms waiting for node "embed-certs-20211020000119-6808" to be "Ready" ...
	I1020 00:11:24.484781  240589 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:11:24.511656  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.519087  240589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:24.630109  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:11:24.630138  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:11:24.631417  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:11:24.634155  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:11:24.634182  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:11:24.635309  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:11:24.727020  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:11:24.727051  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:11:24.729801  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:11:24.729824  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:11:24.818606  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:11:24.818634  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:11:24.824604  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:11:24.824631  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:11:24.912894  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:11:24.912919  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:11:24.925375  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:11:24.935739  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:11:24.935867  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:11:25.117528  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:11:25.117563  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:11:25.127010  240589 start.go:729] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1020 00:11:25.214327  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:11:25.214408  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:11:25.327640  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:11:25.327677  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:11:25.426937  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:11:25.426971  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:11:25.512430  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:11:26.216662  240589 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291244163s)
	I1020 00:11:26.216701  240589 addons.go:375] Verifying addon metrics-server=true in "embed-certs-20211020000119-6808"
	I1020 00:11:26.626154  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:26.935722  240589 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.423190206s)
	I1020 00:11:26.937481  240589 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1020 00:11:26.937509  240589 addons.go:406] enableAddons completed in 2.597963961s
	I1020 00:11:29.030588  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:31.530821  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:34.030653  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:36.530059  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:38.028158  240589 pod_ready.go:97] error getting pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-6pnp8" not found
	I1020 00:11:38.028189  240589 pod_ready.go:81] duration metric: took 13.509042653s waiting for pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace to be "Ready" ...
	E1020 00:11:38.028199  240589 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-6pnp8" not found
	I1020 00:11:38.028205  240589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-dz9xq" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.032342  240589 pod_ready.go:92] pod "coredns-78fcd69978-dz9xq" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.032358  240589 pod_ready.go:81] duration metric: took 4.148783ms waiting for pod "coredns-78fcd69978-dz9xq" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.032368  240589 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.036197  240589 pod_ready.go:92] pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.036212  240589 pod_ready.go:81] duration metric: took 3.838164ms waiting for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.036225  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.039912  240589 pod_ready.go:92] pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.039926  240589 pod_ready.go:81] duration metric: took 3.695119ms waiting for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.039935  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.043596  240589 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.043609  240589 pod_ready.go:81] duration metric: took 3.669613ms waiting for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.043617  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-74h96" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.228457  240589 pod_ready.go:92] pod "kube-proxy-74h96" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.228477  240589 pod_ready.go:81] duration metric: took 184.854795ms waiting for pod "kube-proxy-74h96" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.228486  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.628782  240589 pod_ready.go:92] pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.628805  240589 pod_ready.go:81] duration metric: took 400.312861ms waiting for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.628813  240589 pod_ready.go:38] duration metric: took 14.144023407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:11:38.628829  240589 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:11:38.628873  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:11:38.650528  240589 api_server.go:71] duration metric: took 14.311015391s to wait for apiserver process to appear ...
	I1020 00:11:38.650547  240589 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:11:38.650557  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:11:38.654756  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1020 00:11:38.655563  240589 api_server.go:140] control plane version: v1.22.2
	I1020 00:11:38.655581  240589 api_server.go:130] duration metric: took 5.028701ms to wait for apiserver health ...
	I1020 00:11:38.655588  240589 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:11:38.830888  240589 system_pods.go:59] 9 kube-system pods found
	I1020 00:11:38.830918  240589 system_pods.go:61] "coredns-78fcd69978-dz9xq" [f68a2f7c-693f-456d-afd6-3e45c7a1169c] Running
	I1020 00:11:38.830923  240589 system_pods.go:61] "etcd-embed-certs-20211020000119-6808" [b539f6cf-e0da-475a-ad06-cc380dded84a] Running
	I1020 00:11:38.830927  240589 system_pods.go:61] "kindnet-7g4v2" [2d354868-dd64-457b-ab45-716d8513332c] Running
	I1020 00:11:38.830931  240589 system_pods.go:61] "kube-apiserver-embed-certs-20211020000119-6808" [8ee6c0d9-7dd3-48a0-b215-2c15e47338ad] Running
	I1020 00:11:38.830935  240589 system_pods.go:61] "kube-controller-manager-embed-certs-20211020000119-6808" [f339ba15-c4a4-4598-9e6a-865465e70ef7] Running
	I1020 00:11:38.830938  240589 system_pods.go:61] "kube-proxy-74h96" [8ebc3de5-8c9a-460f-bd78-fd4c2cd7d52e] Running
	I1020 00:11:38.830942  240589 system_pods.go:61] "kube-scheduler-embed-certs-20211020000119-6808" [c8277352-5fd0-42f1-812c-f9d29cf44b07] Running
	I1020 00:11:38.830948  240589 system_pods.go:61] "metrics-server-7c784ccb57-4qx7n" [2d5a6732-f488-4534-a9dc-6d0e07139c47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:11:38.830952  240589 system_pods.go:61] "storage-provisioner" [0ebfffac-6694-46da-adfb-99dfd775ad99] Running
	I1020 00:11:38.830958  240589 system_pods.go:74] duration metric: took 175.36582ms to wait for pod list to return data ...
	I1020 00:11:38.830965  240589 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:11:39.029152  240589 default_sa.go:45] found service account: "default"
	I1020 00:11:39.029179  240589 default_sa.go:55] duration metric: took 198.208683ms for default service account to be created ...
	I1020 00:11:39.029190  240589 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:11:39.231552  240589 system_pods.go:86] 9 kube-system pods found
	I1020 00:11:39.231581  240589 system_pods.go:89] "coredns-78fcd69978-dz9xq" [f68a2f7c-693f-456d-afd6-3e45c7a1169c] Running
	I1020 00:11:39.231587  240589 system_pods.go:89] "etcd-embed-certs-20211020000119-6808" [b539f6cf-e0da-475a-ad06-cc380dded84a] Running
	I1020 00:11:39.231591  240589 system_pods.go:89] "kindnet-7g4v2" [2d354868-dd64-457b-ab45-716d8513332c] Running
	I1020 00:11:39.231596  240589 system_pods.go:89] "kube-apiserver-embed-certs-20211020000119-6808" [8ee6c0d9-7dd3-48a0-b215-2c15e47338ad] Running
	I1020 00:11:39.231600  240589 system_pods.go:89] "kube-controller-manager-embed-certs-20211020000119-6808" [f339ba15-c4a4-4598-9e6a-865465e70ef7] Running
	I1020 00:11:39.231604  240589 system_pods.go:89] "kube-proxy-74h96" [8ebc3de5-8c9a-460f-bd78-fd4c2cd7d52e] Running
	I1020 00:11:39.231607  240589 system_pods.go:89] "kube-scheduler-embed-certs-20211020000119-6808" [c8277352-5fd0-42f1-812c-f9d29cf44b07] Running
	I1020 00:11:39.231614  240589 system_pods.go:89] "metrics-server-7c784ccb57-4qx7n" [2d5a6732-f488-4534-a9dc-6d0e07139c47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:11:39.231618  240589 system_pods.go:89] "storage-provisioner" [0ebfffac-6694-46da-adfb-99dfd775ad99] Running
	I1020 00:11:39.231626  240589 system_pods.go:126] duration metric: took 202.430272ms to wait for k8s-apps to be running ...
	I1020 00:11:39.231634  240589 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:11:39.231674  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:39.240862  240589 system_svc.go:56] duration metric: took 9.2225ms WaitForService to wait for kubelet.
	I1020 00:11:39.240883  240589 kubeadm.go:547] duration metric: took 14.901374004s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:11:39.240909  240589 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:11:39.430550  240589 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:11:39.430575  240589 node_conditions.go:123] node cpu capacity is 8
	I1020 00:11:39.430586  240589 node_conditions.go:105] duration metric: took 189.672341ms to run NodePressure ...
	I1020 00:11:39.430597  240589 start.go:231] waiting for startup goroutines ...
	I1020 00:11:39.473773  240589 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1020 00:11:39.475723  240589 out.go:176] 
	W1020 00:11:39.475933  240589 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.2.
	I1020 00:11:39.477366  240589 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1020 00:11:39.478867  240589 out.go:176] * Done! kubectl is now configured to use "embed-certs-20211020000119-6808" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	6bbb4c44400b6       523cad1a4df73       5 seconds ago       Exited              dashboard-metrics-scraper   2                   7c193a6ecff0b
	de8122a69ebdc       e1482a24335a6       25 seconds ago      Running             kubernetes-dashboard        0                   48854236a19a4
	509c6eb7c34d7       6e38f40d628db       26 seconds ago      Running             storage-provisioner         0                   4a43dcd260169
	585d9c2f0d022       8d147537fb7d1       27 seconds ago      Running             coredns                     0                   55fa9a2e203d0
	76d591cc47994       6de166512aa22       28 seconds ago      Running             kindnet-cni                 0                   b0d37b78efd72
	ae35e640ba60e       873127efbc8a7       28 seconds ago      Running             kube-proxy                  0                   b4bcdc2d8941d
	eaa50fd2ebc34       5425bcbd23c54       49 seconds ago      Running             kube-controller-manager     2                   d9d713a115bcd
	f4aa16b4a5bc5       b51ddc1014b04       49 seconds ago      Running             kube-scheduler              2                   8c8fc41fe704c
	e22dc3c6e30ea       0048118155842       49 seconds ago      Running             etcd                        2                   ddd960943a088
	cf3af14936006       e64579b7d8862       49 seconds ago      Running             kube-apiserver              2                   c48d211474888
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2021-10-20 00:06:11 UTC, end at Wed 2021-10-20 00:11:53 UTC. --
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.595140931Z" level=info msg="StopContainer for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" returns successfully"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.595722660Z" level=info msg="StopPodSandbox for \"a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47\""
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.595791621Z" level=info msg="Container to stop \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.697206938Z" level=info msg="TaskExit event &TaskExit{ContainerID:a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47,ID:a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47,Pid:4466,ExitStatus:137,ExitedAt:2021-10-20 00:11:36.696982922 +0000 UTC,XXX_unrecognized:[],}"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.740312931Z" level=info msg="shim disconnected" id=a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.740545035Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.835895262Z" level=info msg="TearDown network for sandbox \"a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47\" successfully"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.835929377Z" level=info msg="StopPodSandbox for \"a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47\" returns successfully"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:37.566812143Z" level=info msg="RemoveContainer for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:37.573111540Z" level=info msg="RemoveContainer for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" returns successfully"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:37.573560989Z" level=error msg="ContainerStatus for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": not found"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:42.336655138Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Oct 20 00:11:42 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:42.386563867Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Oct 20 00:11:42 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:42.387708078Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.337474342Z" level=info msg="CreateContainer within sandbox \"7c193a6ecff0b9420f709fa7fb5c9ef4b54c9a2b5aa3f2bbb13b722a8fa0093d\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,}"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.381150461Z" level=info msg="CreateContainer within sandbox \"7c193a6ecff0b9420f709fa7fb5c9ef4b54c9a2b5aa3f2bbb13b722a8fa0093d\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,} returns container id \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.381659390Z" level=info msg="StartContainer for \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.518394785Z" level=info msg="StartContainer for \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\" returns successfully"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.552319217Z" level=info msg="Finish piping stderr of container \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.552329840Z" level=info msg="Finish piping stdout of container \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.553212581Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5,ID:6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5,Pid:5499,ExitStatus:1,ExitedAt:2021-10-20 00:11:48.552958038 +0000 UTC,XXX_unrecognized:[],}"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.596461537Z" level=info msg="shim disconnected" id=6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.596538867Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:49.593372371Z" level=info msg="RemoveContainer for \"4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5\""
	Oct 20 00:11:49 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:49.598505615Z" level=info msg="RemoveContainer for \"4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5\" returns successfully"
	
	* 
	* ==> coredns [585d9c2f0d0225bfa0e8824b11ef251ead7f8ddd3f6239f107a6bb341d25a396] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211020000119-6808
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211020000119-6808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6
	                    minikube.k8s.io/name=embed-certs-20211020000119-6808
	                    minikube.k8s.io/updated_at=2021_10_20T00_11_11_0700
	                    minikube.k8s.io/version=v1.23.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 20 Oct 2021 00:11:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211020000119-6808
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 20 Oct 2021 00:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211020000119-6808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
	  System UUID:                52bf727a-d689-4fdd-9046-032112131ad0
	  Boot ID:                    3860adfd-184a-4be3-9030-9440e8dafeac
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.2
	  Kube-Proxy Version:         v1.22.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-dz9xq                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-embed-certs-20211020000119-6808                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-7g4v2                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-embed-certs-20211020000119-6808             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-embed-certs-20211020000119-6808    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-74h96                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-embed-certs-20211020000119-6808             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 metrics-server-7c784ccb57-4qx7n                            100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         28s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-lhdc7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        kubernetes-dashboard-654cf69797-wsz4n                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  50s (x5 over 50s)  kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x4 over 50s)  kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x3 over 50s)  kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                30s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a 3d a2 5d 77 67 08 06        .......=.]wg..
	[  +2.604405] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethfef873ab
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 72 b6 be 24 f6 05 08 06        ......r..$....
	[  +0.059770] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth4452a690
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ee 46 1b 55 28 54 08 06        .......F.U(T..
	[  +0.747799] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth84a7eb03
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 62 ca a4 d8 86 c9 08 06        ......b.......
	[Oct20 00:10] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3db2c6be
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 96 c2 98 0f a9 2c 08 06        ...........,..
	[  +0.019959] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth9bedf5aa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 76 51 7c 28 fe 04 08 06        ......vQ|(....
	[  +1.699758] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth72feb492
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 16 53 98 81 6d 98 08 06        .......S..m...
	[  +0.536253] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth80b02c43
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ba 5b ea d5 c4 e3 08 06        .......[......
	[  +0.439866] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth47588c15
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ea 43 e7 2d d7 2c 08 06        .......C.-.,..
	[Oct20 00:11] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethd6396fc0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 62 48 1b bc 02 c7 08 06        ......bH......
	[  +1.654656] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc38188b6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a 80 82 19 9e b2 08 06        ......Z.......
	[  +0.684157] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethd9f4739c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ca c7 3a d0 58 2c 08 06        ........:.X,..
	[  +0.036119] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth21a64279
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a c9 07 fc a4 ec 08 06        ......J.......
	
	* 
	* ==> etcd [e22dc3c6e30ea67414ef6dfef449c8b17c52ed1cb7934e7050a03f9c5e7b82e1] <==
	* {"level":"info","ts":"2021-10-20T00:11:04.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2021-10-20T00:11:04.914Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211020000119-6808 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-10-20T00:11:05.345Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-10-20T00:11:05.345Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  00:11:53 up  1:54,  0 users,  load average: 0.76, 1.98, 2.69
	Linux embed-certs-20211020000119-6808 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [cf3af14936006cd206aa6c1cbf4b87c4b824199cf7a2249808a7eb272135e6e2] <==
	* I1020 00:11:08.438901       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1020 00:11:08.438924       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I1020 00:11:08.438931       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 00:11:08.439119       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1020 00:11:08.512360       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1020 00:11:09.337385       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1020 00:11:09.337406       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1020 00:11:09.343381       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1020 00:11:09.346250       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1020 00:11:09.346271       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1020 00:11:09.718163       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 00:11:09.747029       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1020 00:11:09.840258       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1020 00:11:09.841064       1 controller.go:611] quota admission added evaluator for: endpoints
	I1020 00:11:09.844408       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 00:11:10.436839       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1020 00:11:11.216689       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1020 00:11:11.246447       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1020 00:11:16.325127       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 00:11:23.991570       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1020 00:11:24.146482       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W1020 00:11:28.512115       1 handler_proxy.go:104] no RequestInfo found in the context
	E1020 00:11:28.512203       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1020 00:11:28.512212       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [eaa50fd2ebc348b5c63113ef233fc29de40bfdb3879455d0faf400ba1bb683f1] <==
	* I1020 00:11:24.300529       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-dz9xq"
	I1020 00:11:24.314577       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-6pnp8"
	I1020 00:11:25.729350       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I1020 00:11:25.741952       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E1020 00:11:25.823272       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I1020 00:11:25.929876       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-4qx7n"
	I1020 00:11:26.335970       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I1020 00:11:26.419301       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.425008       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1020 00:11:26.436593       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.438205       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1020 00:11:26.516628       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-654cf69797 to 1"
	E1020 00:11:26.517921       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.518444       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.614198       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.614667       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1020 00:11:26.615684       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.624203       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1020 00:11:26.627703       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.627774       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.631249       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.631292       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1020 00:11:26.640114       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-654cf69797-wsz4n"
	I1020 00:11:26.720369       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-lhdc7"
	E1020 00:11:53.505520       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [ae35e640ba60e61409b32639933d3c690f1c17a23f76260188a6519b8b41ecc8] <==
	* I1020 00:11:25.329417       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I1020 00:11:25.329468       1 server_others.go:140] Detected node IP 192.168.58.2
	W1020 00:11:25.329490       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1020 00:11:25.529406       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1020 00:11:25.529458       1 server_others.go:212] Using iptables Proxier.
	I1020 00:11:25.529472       1 server_others.go:219] creating dualStackProxier for iptables.
	W1020 00:11:25.529492       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1020 00:11:25.529902       1 server.go:649] Version: v1.22.2
	I1020 00:11:25.531095       1 config.go:224] Starting endpoint slice config controller
	I1020 00:11:25.531114       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1020 00:11:25.533917       1 config.go:315] Starting service config controller
	I1020 00:11:25.533946       1 shared_informer.go:240] Waiting for caches to sync for service config
	E1020 00:11:25.537722       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"embed-certs-20211020000119-6808.16af94b2d3655926", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc053f58b5f9ed1ef, ext:403417950, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-embed-certs-20211020000119-6808", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Nam
e:"embed-certs-20211020000119-6808", UID:"embed-certs-20211020000119-6808", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "embed-certs-20211020000119-6808.16af94b2d3655926" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I1020 00:11:25.631224       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1020 00:11:25.634399       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f4aa16b4a5bc58c0900b91a8a9cdee591f91d53c4d0037655694e918f2b05666] <==
	* I1020 00:11:08.435696       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 00:11:08.435988       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1020 00:11:08.436074       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1020 00:11:08.436979       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1020 00:11:08.438758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1020 00:11:08.439077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:08.439162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1020 00:11:08.439232       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1020 00:11:08.439314       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1020 00:11:08.439382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1020 00:11:08.439438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:08.439499       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:08.439560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1020 00:11:08.439778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1020 00:11:08.439908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1020 00:11:08.439973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1020 00:11:08.440028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1020 00:11:08.440112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:09.319084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1020 00:11:09.431835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1020 00:11:09.484559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1020 00:11:09.484665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:09.520294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:09.562854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1020 00:11:09.935944       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-10-20 00:06:11 UTC, end at Wed 2021-10-20 00:11:53 UTC. --
	Oct 20 00:11:33 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:33.553544    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	Oct 20 00:11:34 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:34.556614    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:34 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:34.556986    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.725803    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:36.726083    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.961437    3723 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edf2e828-f91c-4099-880f-04cced3cf065-config-volume\") pod \"edf2e828-f91c-4099-880f-04cced3cf065\" (UID: \"edf2e828-f91c-4099-880f-04cced3cf065\") "
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.961502    3723 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v298c\" (UniqueName: \"kubernetes.io/projected/edf2e828-f91c-4099-880f-04cced3cf065-kube-api-access-v298c\") pod \"edf2e828-f91c-4099-880f-04cced3cf065\" (UID: \"edf2e828-f91c-4099-880f-04cced3cf065\") "
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: W1020 00:11:36.961750    3723 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/edf2e828-f91c-4099-880f-04cced3cf065/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.961920    3723 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edf2e828-f91c-4099-880f-04cced3cf065-config-volume" (OuterVolumeSpecName: "config-volume") pod "edf2e828-f91c-4099-880f-04cced3cf065" (UID: "edf2e828-f91c-4099-880f-04cced3cf065"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.988198    3723 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf2e828-f91c-4099-880f-04cced3cf065-kube-api-access-v298c" (OuterVolumeSpecName: "kube-api-access-v298c") pod "edf2e828-f91c-4099-880f-04cced3cf065" (UID: "edf2e828-f91c-4099-880f-04cced3cf065"). InnerVolumeSpecName "kube-api-access-v298c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.062358    3723 reconciler.go:319] "Volume detached for volume \"kube-api-access-v298c\" (UniqueName: \"kubernetes.io/projected/edf2e828-f91c-4099-880f-04cced3cf065-kube-api-access-v298c\") on node \"embed-certs-20211020000119-6808\" DevicePath \"\""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.062400    3723 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edf2e828-f91c-4099-880f-04cced3cf065-config-volume\") on node \"embed-certs-20211020000119-6808\" DevicePath \"\""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.565728    3723 scope.go:110] "RemoveContainer" containerID="5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.573311    3723 scope.go:110] "RemoveContainer" containerID="5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:37.573754    3723 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": not found" containerID="5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.573813    3723 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7} err="failed to get container status \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": not found"
	Oct 20 00:11:38 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:38.340293    3723 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=edf2e828-f91c-4099-880f-04cced3cf065 path="/var/lib/kubelet/pods/edf2e828-f91c-4099-880f-04cced3cf065/volumes"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.387977    3723 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.388032    3723 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.388194    3723 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z6hkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-4qx7n_kube-system(2d5a6732-f488-4534-a9dc-6d0e07139c47): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.388255    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-4qx7n" podUID=2d5a6732-f488-4534-a9dc-6d0e07139c47
	Oct 20 00:11:48 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:48.335327    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:49.592435    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:49.592729    3723 scope.go:110] "RemoveContainer" containerID="6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:49.593093    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	
	* 
	* ==> kubernetes-dashboard [de8122a69ebdc8c738ab77d58119c3ff12b556235bc9a22f62e47a9682432b06] <==
	* 2021/10/20 00:11:28 Starting overwatch
	2021/10/20 00:11:28 Using namespace: kubernetes-dashboard
	2021/10/20 00:11:28 Using in-cluster config to connect to apiserver
	2021/10/20 00:11:28 Using secret token for csrf signing
	2021/10/20 00:11:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/10/20 00:11:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/10/20 00:11:28 Successful initial request to the apiserver, version: v1.22.2
	2021/10/20 00:11:28 Generating JWE encryption key
	2021/10/20 00:11:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/10/20 00:11:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/10/20 00:11:28 Initializing JWE encryption key from synchronized object
	2021/10/20 00:11:28 Creating in-cluster Sidecar client
	2021/10/20 00:11:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/10/20 00:11:28 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [509c6eb7c34d77e290f5262c497542e1c2021b9517a4dd32c6b81c77b5096275] <==
	* I1020 00:11:27.242993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 00:11:27.250547       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 00:11:27.250584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 00:11:27.317471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 00:11:27.317644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20211020000119-6808_d353d518-169b-4346-b91f-e15b4e84e8bc!
	I1020 00:11:27.317702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6721c57-b983-4598-9c9b-2172ad9bb06d", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20211020000119-6808_d353d518-169b-4346-b91f-e15b4e84e8bc became leader
	I1020 00:11:27.420177       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20211020000119-6808_d353d518-169b-4346-b91f-e15b4e84e8bc!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-7c784ccb57-4qx7n
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 describe pod metrics-server-7c784ccb57-4qx7n
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20211020000119-6808 describe pod metrics-server-7c784ccb57-4qx7n: exit status 1 (86.22233ms)
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-4qx7n" not found
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20211020000119-6808 describe pod metrics-server-7c784ccb57-4qx7n: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211020000119-6808
helpers_test.go:235: (dbg) docker inspect embed-certs-20211020000119-6808:
-- stdout --
	[
	    {
	        "Id": "08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27",
	        "Created": "2021-10-20T00:04:43.629101344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-10-20T00:06:11.513950317Z",
	            "FinishedAt": "2021-10-20T00:06:08.613519797Z"
	        },
	        "Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
	        "ResolvConfPath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/hostname",
	        "HostsPath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/hosts",
	        "LogPath": "/var/lib/docker/containers/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27/08410c61420ae260367c4880b0ef514ae27629d876de421fc77e26ab59b14e27-json.log",
	        "Name": "/embed-certs-20211020000119-6808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20211020000119-6808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211020000119-6808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9-init/diff:/var/lib/docker/overlay2/c5f02e45537672705e6e2aba01e28e51c2d46e9e49272769c2467ad01e3a8798/diff:/var/lib/docker/overlay2/0ea8f075729b71baf4ccd823eb0c485ef5ef29bac0353c53e6065129629066ba/diff:/var/lib/docker/overlay2/88533b581a0009f254ebbf2a93788f6fea438104765bb8cd83e8242f6f007976/diff:/var/lib/docker/overlay2/14bd4aa948d82ba23130a19a3ed9b4398666fdb3b6961d42dfbea97f89c5cc8d/diff:/var/lib/docker/overlay2/c45d302040f8b5fa2a3863603e33a4a8945fc269c7c7d5b5fed4c0af74e1a318/diff:/var/lib/docker/overlay2/895bf38a811e31bbe8c73359865eb48ad82bbb675c072510dec392e208b25255/diff:/var/lib/docker/overlay2/888eda45177c40a5e8c507f72bfcd0e63d9fb3fbf18304e303b30f7858d75385/diff:/var/lib/docker/overlay2/1ba6be2d4915efd99d70838596bc306c3a5986d1687b7b5c82e3bcce08f2739a/diff:/var/lib/docker/overlay2/dba40492bd25835ae3cae4014121bd7743a6df6c6b6f568b50a45660d36da1ee/diff:/var/lib/docker/overlay2/f5e2e9f4ecf2e4c242e9a2d0f40f87313ae18433e49a64f8e612bc21342ba35a/diff:/var/lib/docker/overlay2/55c3bd632609db28efc8e436188a29be66e40c950a2f07ba9a636848bcf8fe54/diff:/var/lib/docker/overlay2/e743e9a12912c11573135dbbc7915737001fecbbddc57c8609bb97c48e477ab5/diff:/var/lib/docker/overlay2/0d2ab28eea343e33cf288e808e3583a4ef3a2d0f0e152cb4b6a696fe2829aa37/diff:/var/lib/docker/overlay2/a9d9379f41d6df50597861e1eca41e6abb25962190c7c4bf07f9b6cecd398ab8/diff:/var/lib/docker/overlay2/246f3a84f3d47634be292753e6ab114ff0fdd6c19174f98d20d3631b985bd65a/diff:/var/lib/docker/overlay2/1555899ff98b2aefb432f4f7413f555e1419a0c7f1ff8906ff527987be55f98f/diff:/var/lib/docker/overlay2/2046d76f6555966732657251ae8f140fcb5c9ba7becf9f4ea2f38d02553b81a8/diff:/var/lib/docker/overlay2/45481a57311300993df1fbf7c1affd06d3f1b0b4f3d4786b38e20ed6419a53d4/diff:/var/lib/docker/overlay2/3e461aa0d9087cb03e88430b7acc263fdb5b7c9685b89e24b2e636c0b764ff95/diff:/var/lib/docker/overlay2/fd0a889ad13f00ae1f69b1916a5927927a49c38948b00d54db2145eed106176b/diff:/var/lib/docker/overlay2/b9ba91322e89a003f05f9d5f11671de5af7cda4550b6f60c9d4ff90724e4e6f9/diff:/var/lib/docker/overlay2/e778c4cc31a31336a7ba77c6088ba8748d9c140197750d9e6c90b127c01e530b/diff:/var/lib/docker/overlay2/95fbf604ebdd8cc367f642230ac0a68cbc72994b41f02db95f354c4fdc75af24/diff:/var/lib/docker/overlay2/3f2110c7a6beb28d94b23699e2ca6363aa4ea63d43035d2e539e47fcd331a2ba/diff:/var/lib/docker/overlay2/1bf0ef7d7a45ab3ea27aa3d2f6b34dce6e59fc18e7e464e718352e8ea7029fd8/diff:/var/lib/docker/overlay2/23c70a1791e26b5ae01282413205d77ba4ecd95b3e170ea40176e856715a9359/diff:/var/lib/docker/overlay2/d830811487929eab10415090b3d97a6d7865d450267b84aa4184392814d8b1df/diff:/var/lib/docker/overlay2/b36fdfbf0198d3321dcce07dba3a6910f2cbe2dc276d6530613f56cd94afcc31/diff:/var/lib/docker/overlay2/cc70ad4a43336c6e67e06e2c9e0a2f30140594f1db80424144ea2b077c4ab1af/diff:/var/lib/docker/overlay2/d728e7526151cbb5fa43fbde71e65b654ce3b44983512cd2e690c8798d0d3ef8/diff:/var/lib/docker/overlay2/d701d06f30bc941260abb1757ee832c9aa2195a14f1552bd9d4baf5e6144d5ad/diff:/var/lib/docker/overlay2/6bd5ed3108886521103186b5318e03a8d90f0dbc80d27990929d83cd889e533d/diff:/var/lib/docker/overlay2/e27df6450133558da7555ef518af1a37ae6a07ce47c6767fec076ca914b56d50/diff:/var/lib/docker/overlay2/f820866aa600bd3aaac0b53c24ed1df712be2f6192e572cb551d67d8ed5f2ae8/diff:/var/lib/docker/overlay2/867ec5e44c5a5ca3a4bbcb5e3ebe23023fd7c7f42cc297619dd37fa19e58a4ce/diff:/var/lib/docker/overlay2/1de77d9dd95b7ad0b72d0e674dbf02cf40000662dc54842d9d003febd75e9183/diff:/var/lib/docker/overlay2/8249f5dae9e196bbeeca2cec8401a9bd3dc73ea9eb3884270fea296047dc72e1/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60a6fc7494748a3904e300171350692c442bcc83bd423828b0e77d99e28810a9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211020000119-6808",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211020000119-6808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211020000119-6808",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211020000119-6808",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211020000119-6808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1881fce6e1679a94d100bbef96cbc2b75b2a4c23759c2e76ac2bacde1b7d8285",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1881fce6e167",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211020000119-6808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "08410c61420a"
	                    ],
	                    "NetworkID": "470d197350e46e808401c61690f42374e6568e699bc44af734e51d09c76bf9a8",
	                    "EndpointID": "fc908795c7033ede9437e91166114d4e30e7cfe4c7a5c743ae27cb505235a271",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
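When triaging a dump like the one above, the `State` block is usually the first stop (here `Status` is `running`, and `FinishedAt` earlier than `StartedAt` reflects the stop/restart in this test's history). A minimal sketch of extracting those fields from saved `docker inspect` JSON with only the standard library; the abbreviated sample below mirrors the shape of the real output but is not the full dump:

```python
import json

# Abbreviated sample in the shape of `docker inspect <container>` output
# (a JSON array with one object per container).
inspect_output = """
[
    {
        "Name": "/embed-certs-20211020000119-6808",
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "OOMKilled": false,
            "ExitCode": 0,
            "StartedAt": "2021-10-20T00:06:11.513950317Z",
            "FinishedAt": "2021-10-20T00:06:08.613519797Z"
        }
    }
]
"""

def summarize_state(raw: str) -> dict:
    """Return the key State fields of the first container in an inspect dump."""
    container = json.loads(raw)[0]
    state = container["State"]
    return {
        "name": container["Name"].lstrip("/"),
        "status": state["Status"],
        "paused": state["Paused"],
        "oom_killed": state["OOMKilled"],
        "exit_code": state["ExitCode"],
    }

print(summarize_state(inspect_output))
```

The same fields can be pulled directly on the host with `docker inspect --format '{{.State.Status}}' <container>`, but parsing the saved JSON works after the container is gone, which is the usual situation when reading a CI post-mortem.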
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211020000119-6808 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:01:19 UTC | Wed, 20 Oct 2021 00:05:38 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:05:47 UTC | Wed, 20 Oct 2021 00:05:48 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:05:48 UTC | Wed, 20 Oct 2021 00:06:09 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:06:09 UTC | Wed, 20 Oct 2021 00:06:09 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20211020000033-6808                 | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:02:36 UTC | Wed, 20 Oct 2021 00:08:03 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.3-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:14 UTC | Wed, 20 Oct 2021 00:08:14 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:14 UTC | Wed, 20 Oct 2021 00:08:15 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| unpause | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:15 UTC | Wed, 20 Oct 2021 00:08:16 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:17 UTC | Wed, 20 Oct 2021 00:08:20 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20211020000033-6808                 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:08:21 UTC | Wed, 20 Oct 2021 00:08:21 UTC |
	|         | no-preload-20211020000033-6808                    |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:02:45 UTC | Wed, 20 Oct 2021 00:10:20 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false                              |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:04:56 UTC | Wed, 20 Oct 2021 00:10:23 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:31 UTC | Wed, 20 Oct 2021 00:10:31 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:31 UTC | Wed, 20 Oct 2021 00:10:32 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| unpause | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:33 UTC | Wed, 20 Oct 2021 00:10:33 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:34 UTC | Wed, 20 Oct 2021 00:10:34 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:34 UTC | Wed, 20 Oct 2021 00:10:35 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| unpause | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:36 UTC | Wed, 20 Oct 2021 00:10:37 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:34 UTC | Wed, 20 Oct 2021 00:10:37 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20211020000023-6808            | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:38 UTC | Wed, 20 Oct 2021 00:10:38 UTC |
	|         | old-k8s-version-20211020000023-6808               |                                                |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:38 UTC | Wed, 20 Oct 2021 00:10:41 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20211020000323-6808 | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:10:41 UTC | Wed, 20 Oct 2021 00:10:41 UTC |
	|         | default-k8s-different-port-20211020000323-6808    |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:06:09 UTC | Wed, 20 Oct 2021 00:11:39 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:11:50 UTC | Wed, 20 Oct 2021 00:11:50 UTC |
	|         | embed-certs-20211020000119-6808                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211020000119-6808                   | embed-certs-20211020000119-6808                | jenkins | v1.23.2 | Wed, 20 Oct 2021 00:11:53 UTC | Wed, 20 Oct 2021 00:11:53 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/20 00:06:09
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 00:06:09.302430  240589 out.go:297] Setting OutFile to fd 1 ...
	I1020 00:06:09.302620  240589 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1020 00:06:09.302629  240589 out.go:310] Setting ErrFile to fd 2...
	I1020 00:06:09.302633  240589 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1020 00:06:09.302720  240589 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1020 00:06:09.302914  240589 out.go:304] Setting JSON to false
	I1020 00:06:09.339174  240589 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":6531,"bootTime":1634681838,"procs":343,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1020 00:06:09.339278  240589 start.go:121] virtualization: kvm guest
	I1020 00:06:09.341804  240589 out.go:176] * [embed-certs-20211020000119-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1020 00:06:09.343407  240589 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:06:09.341973  240589 notify.go:169] Checking for updates...
	I1020 00:06:09.344720  240589 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 00:06:09.346112  240589 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1020 00:06:09.347535  240589 out.go:176]   - MINIKUBE_LOCATION=12739
	I1020 00:06:09.347945  240589 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:06:09.348356  240589 driver.go:343] Setting default libvirt URI to qemu:///system
	I1020 00:06:09.404187  240589 docker.go:132] docker version: linux-19.03.15
	I1020 00:06:09.404270  240589 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1020 00:06:09.487171  240589 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-10-20 00:06:09.441923152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1020 00:06:09.487249  240589 docker.go:237] overlay module found
	I1020 00:06:09.489177  240589 out.go:176] * Using the docker driver based on existing profile
	I1020 00:06:09.489205  240589 start.go:278] selected driver: docker
	I1020 00:06:09.489212  240589 start.go:751] validating driver "docker" against &{Name:embed-certs-20211020000119-6808 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1020 00:06:09.489333  240589 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1020 00:06:09.489374  240589 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1020 00:06:09.489397  240589 out.go:241] ! Your cgroup does not allow setting memory.
	I1020 00:06:09.491002  240589 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1020 00:06:09.491875  240589 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1020 00:06:09.572869  240589 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-10-20 00:06:09.5291671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W1020 00:06:09.572991  240589 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1020 00:06:09.573022  240589 out.go:241] ! Your cgroup does not allow setting memory.
	I1020 00:06:09.575120  240589 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1020 00:06:09.575228  240589 start_flags.go:748] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 00:06:09.575253  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:06:09.575263  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:06:09.575276  240589 start_flags.go:280] config:
	{Name:embed-certs-20211020000119-6808 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1020 00:06:09.577194  240589 out.go:176] * Starting control plane node embed-certs-20211020000119-6808 in cluster embed-certs-20211020000119-6808
	I1020 00:06:09.577227  240589 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1020 00:06:09.578744  240589 out.go:176] * Pulling base image ...
	I1020 00:06:09.578776  240589 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1020 00:06:09.578811  240589 preload.go:146] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4
	I1020 00:06:09.578826  240589 cache.go:57] Caching tarball of preloaded images
	I1020 00:06:09.578876  240589 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1020 00:06:09.579087  240589 preload.go:172] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1020 00:06:09.579105  240589 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2 on containerd
	I1020 00:06:09.579259  240589 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/config.json ...
	I1020 00:06:09.681469  240589 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1020 00:06:09.681494  240589 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1020 00:06:09.681511  240589 cache.go:206] Successfully downloaded all kic artifacts
	I1020 00:06:09.681543  240589 start.go:313] acquiring machines lock for embed-certs-20211020000119-6808: {Name:mk7187c9ae4df6d41aa7c4922b992dfb2ef12393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 00:06:09.681652  240589 start.go:317] acquired machines lock for "embed-certs-20211020000119-6808" in 64.93µs
	I1020 00:06:09.681678  240589 start.go:93] Skipping create...Using existing machine configuration
	I1020 00:06:09.681682  240589 fix.go:55] fixHost starting: 
	I1020 00:06:09.681925  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:06:09.721280  240589 fix.go:108] recreateIfNeeded on embed-certs-20211020000119-6808: state=Stopped err=<nil>
	W1020 00:06:09.721309  240589 fix.go:134] unexpected machine state, will restart: <nil>
	I1020 00:06:06.010824  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:08.510799  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:07.397812  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:09.398009  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:08.345903  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:10.847006  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:09.723989  240589 out.go:176] * Restarting existing docker container for "embed-certs-20211020000119-6808" ...
	I1020 00:06:09.724048  240589 cli_runner.go:115] Run: docker start embed-certs-20211020000119-6808
	I1020 00:06:11.522123  240589 cli_runner.go:168] Completed: docker start embed-certs-20211020000119-6808: (1.798049644s)
	I1020 00:06:11.522197  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:06:11.565365  240589 kic.go:420] container "embed-certs-20211020000119-6808" state is running.
	I1020 00:06:11.565737  240589 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211020000119-6808
	I1020 00:06:11.607142  240589 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/config.json ...
	I1020 00:06:11.607366  240589 machine.go:88] provisioning docker machine ...
	I1020 00:06:11.607392  240589 ubuntu.go:169] provisioning hostname "embed-certs-20211020000119-6808"
	I1020 00:06:11.607446  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:11.648830  240589 main.go:130] libmachine: Using SSH client type: native
	I1020 00:06:11.649014  240589 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1020 00:06:11.649033  240589 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211020000119-6808 && echo "embed-certs-20211020000119-6808" | sudo tee /etc/hostname
	I1020 00:06:11.649483  240589 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58498->127.0.0.1:33030: read: connection reset by peer
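The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above resolve the host port mapped to the container's SSH port (33030 here). A minimal Python sketch of the same field extraction, run against a hypothetical trimmed inspect payload rather than a live Docker daemon:

```python
import json

# Hypothetical `docker container inspect` output, reduced to the fields the
# Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} reads.
inspect_output = json.dumps([{
    "NetworkSettings": {
        "Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33030"}]}
    }
}])

container = json.loads(inspect_output)[0]
# Index into Ports by protocol key, take the first binding, read HostPort --
# the same traversal the Go template performs.
host_port = container["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"]
print(host_port)
```

This is only an illustration of the JSON path involved; minikube itself evaluates the Go template inside the Docker CLI.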
	I1020 00:06:11.009995  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:13.010500  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:15.010983  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:11.398393  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:13.898112  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:13.346550  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:15.346980  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:14.763882  240589 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211020000119-6808
	
	I1020 00:06:14.763953  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:14.803082  240589 main.go:130] libmachine: Using SSH client type: native
	I1020 00:06:14.803220  240589 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0b00] 0x7a3be0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1020 00:06:14.803242  240589 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211020000119-6808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211020000119-6808/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211020000119-6808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 00:06:14.915281  240589 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1020 00:06:14.915314  240589 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube}
	I1020 00:06:14.915333  240589 ubuntu.go:177] setting up certificates
	I1020 00:06:14.915343  240589 provision.go:83] configureAuth start
	I1020 00:06:14.915385  240589 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211020000119-6808
	I1020 00:06:14.955130  240589 provision.go:138] copyHostCerts
	I1020 00:06:14.955216  240589 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem, removing ...
	I1020 00:06:14.955229  240589 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem
	I1020 00:06:14.955305  240589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.pem (1082 bytes)
	I1020 00:06:14.955401  240589 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem, removing ...
	I1020 00:06:14.955413  240589 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem
	I1020 00:06:14.955445  240589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cert.pem (1123 bytes)
	I1020 00:06:14.955503  240589 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem, removing ...
	I1020 00:06:14.955511  240589 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem
	I1020 00:06:14.955538  240589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/key.pem (1679 bytes)
	I1020 00:06:14.955592  240589 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211020000119-6808 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211020000119-6808]
	I1020 00:06:15.115907  240589 provision.go:172] copyRemoteCerts
	I1020 00:06:15.115958  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 00:06:15.115988  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.159903  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.278849  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 00:06:15.295278  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1020 00:06:15.311492  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 00:06:15.327812  240589 provision.go:86] duration metric: configureAuth took 412.459086ms
	I1020 00:06:15.327868  240589 ubuntu.go:193] setting minikube options for container-runtime
	I1020 00:06:15.328021  240589 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:06:15.328033  240589 machine.go:91] provisioned docker machine in 3.72065274s
	I1020 00:06:15.328040  240589 start.go:267] post-start starting for "embed-certs-20211020000119-6808" (driver="docker")
	I1020 00:06:15.328045  240589 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 00:06:15.328080  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 00:06:15.328123  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.368400  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.446897  240589 ssh_runner.go:152] Run: cat /etc/os-release
	I1020 00:06:15.449422  240589 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1020 00:06:15.449444  240589 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 00:06:15.449452  240589 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1020 00:06:15.449457  240589 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1020 00:06:15.449464  240589 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/addons for local assets ...
	I1020 00:06:15.449507  240589 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files for local assets ...
	I1020 00:06:15.449570  240589 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem -> 68082.pem in /etc/ssl/certs
	I1020 00:06:15.449643  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I1020 00:06:15.455856  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /etc/ssl/certs/68082.pem (1708 bytes)
	I1020 00:06:15.471499  240589 start.go:270] post-start completed in 143.450063ms
	I1020 00:06:15.471550  240589 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 00:06:15.471583  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.512337  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.591886  240589 fix.go:57] fixHost completed within 5.910199097s
	I1020 00:06:15.591910  240589 start.go:80] releasing machines lock for "embed-certs-20211020000119-6808", held for 5.910246637s
	I1020 00:06:15.591976  240589 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211020000119-6808
	I1020 00:06:15.632809  240589 ssh_runner.go:152] Run: systemctl --version
	I1020 00:06:15.632828  240589 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1020 00:06:15.632865  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.632872  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:06:15.674281  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.676062  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:06:15.771657  240589 ssh_runner.go:152] Run: sudo systemctl stop -f crio
	I1020 00:06:15.781776  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1020 00:06:15.790100  240589 docker.go:156] disabling docker service ...
	I1020 00:06:15.790144  240589 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
	I1020 00:06:15.804968  240589 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
	I1020 00:06:15.813315  240589 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
	I1020 00:06:15.879518  240589 ssh_runner.go:152] Run: sudo systemctl mask docker.service
	I1020 00:06:15.938852  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
	I1020 00:06:15.947547  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 00:06:15.959658  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuY2dyb3Vwc10KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLmNyaV0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzY
W5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My41IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKCVtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmRdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jXQogICAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgICAgU3lzdGVtZENncm91cCA9IGZhbHNlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVud
HJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
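The containerd config above is shipped base64-encoded and decoded on the host with `base64 -d | sudo tee /etc/containerd/config.toml`. Any such payload can be inspected the same way; here only the first setting of the payload is decoded, for illustration:

```shell
# Decode a prefix of the base64 payload the log pipes into
# /etc/containerd/config.toml (just the first TOML line).
CONFIG_B64='cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo='
echo "$CONFIG_B64" | base64 -d   # → root = "/var/lib/containerd"
```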
	I1020 00:06:15.971992  240589 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 00:06:15.977723  240589 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1020 00:06:15.977768  240589 ssh_runner.go:152] Run: sudo modprobe br_netfilter
	I1020 00:06:15.984350  240589 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 00:06:15.990350  240589 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1020 00:06:16.050094  240589 ssh_runner.go:152] Run: sudo systemctl restart containerd
	I1020 00:06:16.116825  240589 start.go:393] Will wait 60s for socket path /run/containerd/containerd.sock
	I1020 00:06:16.116897  240589 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
	I1020 00:06:16.120659  240589 start.go:414] Will wait 60s for crictl version
	I1020 00:06:16.120705  240589 ssh_runner.go:152] Run: sudo crictl version
	I1020 00:06:16.144288  240589 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-10-20T00:06:16Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1020 00:06:17.510240  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:19.510451  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:16.397770  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:18.397987  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:20.897395  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:17.845906  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:19.847301  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:22.009986  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:24.010345  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:22.897480  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:25.397498  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:22.345565  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:24.345988  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:27.192504  240589 ssh_runner.go:152] Run: sudo crictl version
	I1020 00:06:27.224058  240589 start.go:423] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I1020 00:06:27.224125  240589 ssh_runner.go:152] Run: containerd --version
	I1020 00:06:27.247660  240589 ssh_runner.go:152] Run: containerd --version
	I1020 00:06:27.271629  240589 out.go:176] * Preparing Kubernetes v1.22.2 on containerd 1.4.9 ...
	I1020 00:06:27.271708  240589 cli_runner.go:115] Run: docker network inspect embed-certs-20211020000119-6808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 00:06:27.310346  240589 ssh_runner.go:152] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1020 00:06:27.313566  240589 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
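The /etc/hosts rewrite above is idempotent: any stale line for the name is filtered out, the fresh mapping is appended, and the temp file is copied back. The same pattern against a throwaway file (matching on the hostname only, where the real command anchors on a tab):

```shell
# Idempotent hosts-file update, as in the log, against a temp file.
hosts=/tmp/hosts.demo
printf '127.0.0.1 localhost\n192.168.58.1 host.minikube.internal\n' > "$hosts"
{ grep -v 'host\.minikube\.internal$' "$hosts"
  printf '192.168.58.1 host.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host\.minikube\.internal$' "$hosts"   # → 1, no duplicate entry
```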
	I1020 00:06:27.322677  240589 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1020 00:06:27.322729  240589 ssh_runner.go:152] Run: sudo crictl images --output json
	I1020 00:06:27.346327  240589 containerd.go:635] all images are preloaded for containerd runtime.
	I1020 00:06:27.346346  240589 containerd.go:539] Images already preloaded, skipping extraction
	I1020 00:06:27.346385  240589 ssh_runner.go:152] Run: sudo crictl images --output json
	I1020 00:06:27.368362  240589 containerd.go:635] all images are preloaded for containerd runtime.
	I1020 00:06:27.368384  240589 cache_images.go:79] Images are preloaded, skipping loading
	I1020 00:06:27.368436  240589 ssh_runner.go:152] Run: sudo crictl info
	I1020 00:06:27.391015  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:06:27.391040  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:06:27.391054  240589 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1020 00:06:27.391069  240589 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211020000119-6808 NodeName:embed-certs-20211020000119-6808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1020 00:06:27.391228  240589 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211020000119-6808"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
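The kubeadm.yaml generated above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A quick structural check on a skeleton with the same four kinds:

```shell
# Reproduce the document structure of the generated kubeadm.yaml and
# count the separators: 3 separators means 4 YAML documents.
cat <<'EOF' > /tmp/kubeadm-skeleton.yaml
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^---$' /tmp/kubeadm-skeleton.yaml   # → 3
```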
	
	I1020 00:06:27.391352  240589 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20211020000119-6808 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1020 00:06:27.391412  240589 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.2
	I1020 00:06:27.398607  240589 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 00:06:27.398666  240589 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 00:06:27.405263  240589 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I1020 00:06:27.417352  240589 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 00:06:27.429311  240589 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1020 00:06:27.441088  240589 ssh_runner.go:152] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1020 00:06:27.443741  240589 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 00:06:27.451934  240589 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808 for IP: 192.168.58.2
	I1020 00:06:27.452034  240589 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key
	I1020 00:06:27.452066  240589 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key
	I1020 00:06:27.452131  240589 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/client.key
	I1020 00:06:27.452179  240589 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/apiserver.key.cee25041
	I1020 00:06:27.452210  240589 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/proxy-client.key
	I1020 00:06:27.452315  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem (1338 bytes)
	W1020 00:06:27.452342  240589 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808_empty.pem, impossibly tiny 0 bytes
	I1020 00:06:27.452352  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca-key.pem (1679 bytes)
	I1020 00:06:27.452378  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/ca.pem (1082 bytes)
	I1020 00:06:27.452400  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/cert.pem (1123 bytes)
	I1020 00:06:27.452420  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/key.pem (1679 bytes)
	I1020 00:06:27.452470  240589 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem (1708 bytes)
	I1020 00:06:27.453404  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1020 00:06:27.468991  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 00:06:27.484431  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 00:06:27.500560  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/embed-certs-20211020000119-6808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 00:06:27.516625  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 00:06:27.532300  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 00:06:27.547994  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 00:06:27.563524  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1020 00:06:27.578941  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/certs/6808.pem --> /usr/share/ca-certificates/6808.pem (1338 bytes)
	I1020 00:06:27.595429  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/ssl/certs/68082.pem --> /usr/share/ca-certificates/68082.pem (1708 bytes)
	I1020 00:06:27.610865  240589 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 00:06:27.626160  240589 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 00:06:27.637664  240589 ssh_runner.go:152] Run: openssl version
	I1020 00:06:27.642063  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6808.pem && ln -fs /usr/share/ca-certificates/6808.pem /etc/ssl/certs/6808.pem"
	I1020 00:06:27.648693  240589 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/6808.pem
	I1020 00:06:27.651418  240589 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 19 23:29 /usr/share/ca-certificates/6808.pem
	I1020 00:06:27.651450  240589 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6808.pem
	I1020 00:06:27.656039  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6808.pem /etc/ssl/certs/51391683.0"
	I1020 00:06:27.662349  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68082.pem && ln -fs /usr/share/ca-certificates/68082.pem /etc/ssl/certs/68082.pem"
	I1020 00:06:27.669076  240589 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/68082.pem
	I1020 00:06:27.671878  240589 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 19 23:29 /usr/share/ca-certificates/68082.pem
	I1020 00:06:27.671919  240589 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68082.pem
	I1020 00:06:27.676445  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68082.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 00:06:27.682789  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 00:06:27.689585  240589 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 00:06:27.692386  240589 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 19 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1020 00:06:27.692421  240589 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 00:06:27.696850  240589 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 00:06:27.703109  240589 kubeadm.go:390] StartCluster: {Name:embed-certs-20211020000119-6808 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:embed-certs-20211020000119-6808 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil>
ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1020 00:06:27.703191  240589 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1020 00:06:27.703226  240589 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:06:27.725958  240589 cri.go:76] found id: "c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6"
	I1020 00:06:27.725977  240589 cri.go:76] found id: "a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1"
	I1020 00:06:27.725984  240589 cri.go:76] found id: "f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9"
	I1020 00:06:27.725991  240589 cri.go:76] found id: "c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15"
	I1020 00:06:27.725998  240589 cri.go:76] found id: "dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156"
	I1020 00:06:27.726006  240589 cri.go:76] found id: "07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a"
	I1020 00:06:27.726015  240589 cri.go:76] found id: "6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f"
	I1020 00:06:27.726025  240589 cri.go:76] found id: "cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8"
	I1020 00:06:27.726036  240589 cri.go:76] found id: ""
	I1020 00:06:27.726080  240589 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1020 00:06:27.739226  240589 cri.go:103] JSON = null
	W1020 00:06:27.739271  240589 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
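The warning above comes from cross-checking two views of the runtime: `crictl ps` listed 8 kube-system containers, while `runc ... list -f json` returned the JSON literal `null`, i.e. zero paused containers to unpause. A sketch of that consistency check, with this log's values as placeholder data:

```shell
# The check behind kubeadm.go:397, with the values from this log:
# runc returned JSON `null`, crictl ps returned 8 container IDs.
runc_json='null'
ps_count=8
if [ "$runc_json" = "null" ]; then paused=0; else paused=$runc_json; fi
if [ "$paused" -ne "$ps_count" ]; then
  echo "unpause failed: list paused: list returned $paused containers, but ps returned $ps_count"
fi
```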
	I1020 00:06:27.739317  240589 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 00:06:27.745997  240589 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1020 00:06:27.746016  240589 kubeadm.go:600] restartCluster start
	I1020 00:06:27.746068  240589 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1020 00:06:27.751933  240589 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:27.752721  240589 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20211020000119-6808" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:06:27.753031  240589 kubeconfig.go:127] "embed-certs-20211020000119-6808" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig - will repair!
	I1020 00:06:27.753586  240589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:06:27.755655  240589 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 00:06:27.761574  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:27.761613  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:27.773653  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:27.974031  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:27.974102  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:27.987733  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.173901  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.173986  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.187297  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.374530  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.374613  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.387936  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.574199  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.574263  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.587924  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.774223  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.774299  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.787608  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:28.973839  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:28.973913  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:28.987356  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.174592  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.174668  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.187874  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:26.010454  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:28.509990  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:27.397929  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:29.897706  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:26.846227  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:29.345819  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:29.374417  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.374488  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.388055  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.574325  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.574395  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.588045  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.774251  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.774327  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.787522  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:29.974759  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:29.974850  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:29.988493  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.173716  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.173794  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.187149  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.374432  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.374503  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.387952  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.574194  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.574264  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.588219  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.774487  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.774567  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.787750  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.787776  240589 api_server.go:165] Checking apiserver status ...
	I1020 00:06:30.787828  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1020 00:06:30.799489  240589 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.799509  240589 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I1020 00:06:30.799516  240589 kubeadm.go:1032] stopping kube-system containers ...
	I1020 00:06:30.799525  240589 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:06:30.799561  240589 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:06:30.834440  240589 cri.go:76] found id: "c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6"
	I1020 00:06:30.834463  240589 cri.go:76] found id: "a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1"
	I1020 00:06:30.834468  240589 cri.go:76] found id: "f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9"
	I1020 00:06:30.834473  240589 cri.go:76] found id: "c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15"
	I1020 00:06:30.834477  240589 cri.go:76] found id: "dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156"
	I1020 00:06:30.834482  240589 cri.go:76] found id: "07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a"
	I1020 00:06:30.834487  240589 cri.go:76] found id: "6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f"
	I1020 00:06:30.834491  240589 cri.go:76] found id: "cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8"
	I1020 00:06:30.834495  240589 cri.go:76] found id: ""
	I1020 00:06:30.834500  240589 cri.go:220] Stopping containers: [c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6 a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1 f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9 c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15 dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156 07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a 6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8]
	I1020 00:06:30.834539  240589 ssh_runner.go:152] Run: which crictl
	I1020 00:06:30.837524  240589 ssh_runner.go:152] Run: sudo /usr/bin/crictl stop c1fb767a7419320ecb83a210aa9861ac71bc7880706f49a328e45aad86f82cb6 a52ef32275c1f6e0510fa1814a42ff042726906d6c0315c4f19176464722fcf1 f23c623e9c9c25357fc2d88529567ef060d92c02c01bb49b286eedee51fa39b9 c95619e5276b0546901bf50a62103d60586744738e7fb9e1e7611a5036e0ec15 dba9471d048c2830b015bc20b30ad7ec0be98e26a6dae0072eec92115eccf156 07797be42368cdb06f549720b43b9a7279fe9f31c163f38f3035e4b790007e1a 6b9b04f69dd7091480ce72178a541e0be6e229f9419c8ae5fc4422462f2f196f cde54ed35a44493e20c78309b480f17b39e66c2fb0b5ea8c84aa2975d5a493e8
	I1020 00:06:30.861801  240589 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I1020 00:06:30.871303  240589 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:06:30.877932  240589 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Oct 20 00:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 20 00:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Oct 20 00:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 20 00:04 /etc/kubernetes/scheduler.conf
	
	I1020 00:06:30.877975  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 00:06:30.884278  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 00:06:30.890497  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 00:06:30.896978  240589 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.897024  240589 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 00:06:30.902937  240589 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 00:06:30.909136  240589 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 00:06:30.909170  240589 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 00:06:30.915073  240589 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:06:30.921165  240589 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1020 00:06:30.921183  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:30.966403  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:31.966559  240589 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.000129219s)
	I1020 00:06:31.966587  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:32.090848  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:32.145647  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:32.225526  240589 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:06:32.225581  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:32.739171  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:33.239115  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:33.739297  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:34.239356  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:30.510423  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:33.010933  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:31.897825  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:34.397777  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:31.345905  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:33.846052  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:34.738792  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:35.238858  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:35.739612  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:36.238993  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:36.738977  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:37.239588  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:37.739297  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:38.239516  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:38.738960  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:06:38.756280  240589 api_server.go:71] duration metric: took 6.530753166s to wait for apiserver process to appear ...
	I1020 00:06:38.756305  240589 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:06:38.756316  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:35.509720  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:37.510415  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:39.510776  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:36.897685  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:39.397842  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:36.346821  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:38.349512  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:40.846565  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:42.244630  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 00:06:42.244657  240589 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 00:06:42.744872  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:42.749162  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1020 00:06:42.749187  240589 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1020 00:06:43.245750  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:43.250901  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1020 00:06:43.250932  240589 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1020 00:06:43.745156  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:06:43.749488  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1020 00:06:43.755054  240589 api_server.go:140] control plane version: v1.22.2
	I1020 00:06:43.755076  240589 api_server.go:130] duration metric: took 4.998765151s to wait for apiserver health ...
	I1020 00:06:43.755087  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:06:43.755094  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:06:43.757314  240589 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:06:43.757377  240589 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:06:43.761031  240589 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1020 00:06:43.761051  240589 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:06:43.773672  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:06:43.919885  240589 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:06:43.931132  240589 system_pods.go:59] 9 kube-system pods found
	I1020 00:06:43.931161  240589 system_pods.go:61] "coredns-78fcd69978-fg4q9" [f034ef5b-a0da-4301-9a1f-44df829113d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 00:06:43.931169  240589 system_pods.go:61] "etcd-embed-certs-20211020000119-6808" [8eb4c69b-031a-444a-a28c-a8f281925a8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 00:06:43.931174  240589 system_pods.go:61] "kindnet-5p66p" [d20601c0-5621-4a57-90ab-643005978f0c] Running
	I1020 00:06:43.931178  240589 system_pods.go:61] "kube-apiserver-embed-certs-20211020000119-6808" [824e320d-c60f-4fd8-867e-bcec1f4bb811] Running
	I1020 00:06:43.931182  240589 system_pods.go:61] "kube-controller-manager-embed-certs-20211020000119-6808" [3eeae4c1-1a77-4c22-8f42-f7c9e0cf69f4] Running
	I1020 00:06:43.931186  240589 system_pods.go:61] "kube-proxy-6kjxd" [08322047-768f-4cab-ba86-f4b9086a3349] Running
	I1020 00:06:43.931191  240589 system_pods.go:61] "kube-scheduler-embed-certs-20211020000119-6808" [f621e964-08b6-486f-b7e2-aacf75e911af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 00:06:43.931196  240589 system_pods.go:61] "metrics-server-7c784ccb57-2df4s" [a75689b5-73cd-4993-a4ff-de7005c47eb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:06:43.931200  240589 system_pods.go:61] "storage-provisioner" [ae570658-56f1-42cc-927b-390d4b9e7fe7] Running
	I1020 00:06:43.931205  240589 system_pods.go:74] duration metric: took 11.297918ms to wait for pod list to return data ...
	I1020 00:06:43.931211  240589 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:06:43.934362  240589 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:06:43.934385  240589 node_conditions.go:123] node cpu capacity is 8
	I1020 00:06:43.934394  240589 node_conditions.go:105] duration metric: took 3.179694ms to run NodePressure ...
	I1020 00:06:43.934408  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 00:06:44.086190  240589 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1020 00:06:44.089799  240589 kubeadm.go:746] kubelet initialised
	I1020 00:06:44.089816  240589 kubeadm.go:747] duration metric: took 3.605637ms waiting for restarted kubelet to initialise ...
	I1020 00:06:44.089823  240589 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:06:44.094220  240589 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:42.010385  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:44.010846  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:41.897843  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:43.898084  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:45.898550  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:43.346797  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:45.847238  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:46.107960  240589 pod_ready.go:102] pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.607158  240589 pod_ready.go:102] pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:46.509986  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.510080  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.398002  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:50.398483  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:48.346824  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:50.846281  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:50.107914  240589 pod_ready.go:92] pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:50.107938  240589 pod_ready.go:81] duration metric: took 6.013692429s waiting for pod "coredns-78fcd69978-fg4q9" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:50.107950  240589 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:51.118058  240589 pod_ready.go:92] pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:51.118083  240589 pod_ready.go:81] duration metric: took 1.010125287s waiting for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:51.118100  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.627218  240589 pod_ready.go:92] pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:52.627242  240589 pod_ready.go:81] duration metric: took 1.50913362s waiting for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.627253  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.632785  240589 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:52.632836  240589 pod_ready.go:81] duration metric: took 5.575451ms waiting for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.632864  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kjxd" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.637065  240589 pod_ready.go:92] pod "kube-proxy-6kjxd" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:52.637081  240589 pod_ready.go:81] duration metric: took 4.198274ms waiting for pod "kube-proxy-6kjxd" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:52.637089  240589 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:53.646143  240589 pod_ready.go:92] pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:06:53.646167  240589 pod_ready.go:81] duration metric: took 1.009072799s waiting for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:53.646176  240589 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace to be "Ready" ...
	I1020 00:06:51.011173  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:53.510555  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:52.898512  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:55.398157  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:53.346739  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:55.347046  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:55.711508  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:57.712310  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:56.010028  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:58.010437  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:57.898010  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:00.397302  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:57.846483  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:00.345776  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:06:59.712758  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:02.211666  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:00.510404  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:03.010275  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:05.010790  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:02.398696  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:04.897626  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:02.345881  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:04.346322  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:04.710829  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:06.711303  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:08.711429  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:07.509992  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:09.510323  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:06.897686  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:09.397668  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:06.346456  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:08.845864  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:10.711549  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:13.211454  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:12.010234  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:14.509664  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:11.897468  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:13.897593  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:15.898039  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:11.346281  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:13.346656  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:15.846868  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:15.212302  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:17.710951  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:16.510959  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:19.009950  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:18.398003  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:20.398246  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:18.346750  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:20.845766  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:19.711340  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:21.711558  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.210455  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:21.510420  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.011118  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:22.897752  217964 pod_ready.go:102] pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.393657  217964 pod_ready.go:81] duration metric: took 4m0.383160461s waiting for pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace to be "Ready" ...
	E1020 00:07:24.393680  217964 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-fq5mz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:07:24.393717  217964 pod_ready.go:38] duration metric: took 4m13.926189528s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:07:24.393759  217964 kubeadm.go:604] restartCluster took 4m30.223477483s
	W1020 00:07:24.393900  217964 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:07:24.393937  217964 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:07:22.845867  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:24.847139  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:26.211224  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:28.211665  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:26.510388  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:29.009786  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:27.703411  217964 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.309447788s)
	I1020 00:07:27.703482  217964 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:07:27.714110  217964 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:07:27.714196  217964 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:07:27.737171  217964 cri.go:76] found id: ""
	I1020 00:07:27.737232  217964 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:07:27.744890  217964 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:07:27.744945  217964 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:07:27.751572  217964 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:07:27.751617  217964 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:07:27.347106  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:29.846020  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:30.711728  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:33.211464  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:31.010733  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:33.510706  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:31.846057  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:33.846597  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:35.846679  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:35.213797  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:37.711700  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:36.010508  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:38.509836  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:37.846785  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:39.847032  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.144522  217964 out.go:203]   - Generating certificates and keys ...
	I1020 00:07:42.147308  217964 out.go:203]   - Booting up control plane ...
	I1020 00:07:42.149841  217964 out.go:203]   - Configuring RBAC rules ...
	I1020 00:07:42.151880  217964 cni.go:93] Creating CNI manager for ""
	I1020 00:07:42.151896  217964 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:07:40.210971  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.211584  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:40.510537  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.511178  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:45.011088  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:42.153553  217964 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:07:42.153608  217964 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:07:42.157249  217964 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl ...
	I1020 00:07:42.157274  217964 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:07:42.170228  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:07:42.330213  217964 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:07:42.330271  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.330302  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=no-preload-20211020000033-6808 minikube.k8s.io/updated_at=2021_10_20T00_07_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.347371  217964 ops.go:34] apiserver oom_adj: -16
	I1020 00:07:42.438482  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.991553  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:43.491084  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:43.991660  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:44.491106  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:44.991625  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:45.491988  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:45.991775  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:42.346740  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:44.845580  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:44.711270  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:46.711620  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:48.711772  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:47.011459  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:49.510825  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:46.491968  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:46.991135  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:47.491341  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:47.991758  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:48.491464  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:48.991469  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:49.491286  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:49.991003  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:50.491666  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:50.991660  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:46.846761  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:49.346661  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:51.211576  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:53.711189  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:51.510943  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:53.511349  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:51.491183  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:51.991111  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:52.491394  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:52.991231  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:53.491033  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:53.991015  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:54.491192  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:54.991572  217964 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:07:55.217289  217964 kubeadm.go:985] duration metric: took 12.887077795s to wait for elevateKubeSystemPrivileges.
	I1020 00:07:55.217327  217964 kubeadm.go:392] StartCluster complete in 5m1.088683309s
	I1020 00:07:55.217350  217964 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:07:55.217439  217964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:07:55.219216  217964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:07:55.735530  217964 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20211020000033-6808" rescaled to 1
	I1020 00:07:55.735591  217964 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.22.3-rc.0 ControlPlane:true Worker:true}
	I1020 00:07:55.738246  217964 out.go:176] * Verifying Kubernetes components...
	I1020 00:07:55.735637  217964 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:07:55.735651  217964 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:07:55.738434  217964 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738455  217964 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.738466  217964 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:07:55.738478  217964 addons.go:65] Setting dashboard=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738492  217964 addons.go:65] Setting metrics-server=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738500  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.738501  217964 addons.go:153] Setting addon dashboard=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.738513  217964 addons.go:165] addon dashboard should already be in state true
	I1020 00:07:55.738519  217964 addons.go:153] Setting addon metrics-server=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.738530  217964 addons.go:165] addon metrics-server should already be in state true
	I1020 00:07:55.738544  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.738558  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.738479  217964 addons.go:65] Setting default-storageclass=true in profile "no-preload-20211020000033-6808"
	I1020 00:07:55.738613  217964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20211020000033-6808"
	I1020 00:07:55.738308  217964 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:07:55.735871  217964 config.go:176] Loaded profile config "no-preload-20211020000033-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.3-rc.0
	I1020 00:07:55.738917  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.739004  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.739010  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.739009  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.808883  217964 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:07:55.810465  217964 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:07:55.810520  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:07:55.810531  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:07:55.810582  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.809042  217964 addons.go:153] Setting addon default-storageclass=true in "no-preload-20211020000033-6808"
	W1020 00:07:55.810793  217964 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:07:55.810827  217964 host.go:66] Checking if "no-preload-20211020000033-6808" exists ...
	I1020 00:07:55.813722  217964 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:07:55.811173  217964 cli_runner.go:115] Run: docker container inspect no-preload-20211020000033-6808 --format={{.State.Status}}
	I1020 00:07:55.813840  217964 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:07:55.813855  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:07:55.815456  217964 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:07:55.815523  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:07:55.815536  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:07:55.813904  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.815582  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.829773  217964 node_ready.go:35] waiting up to 6m0s for node "no-preload-20211020000033-6808" to be "Ready" ...
	I1020 00:07:55.830179  217964 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 00:07:55.833048  217964 node_ready.go:49] node "no-preload-20211020000033-6808" has status "Ready":"True"
	I1020 00:07:55.833065  217964 node_ready.go:38] duration metric: took 3.252993ms waiting for node "no-preload-20211020000033-6808" to be "Ready" ...
	I1020 00:07:55.833076  217964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:07:55.838688  217964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace to be "Ready" ...
	I1020 00:07:55.873133  217964 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:07:55.873158  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:07:55.873213  217964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211020000033-6808
	I1020 00:07:55.879925  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:55.883039  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:55.890722  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:55.923946  217964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33000 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/no-preload-20211020000033-6808/id_rsa Username:docker}
	I1020 00:07:51.846390  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:54.346641  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:56.129047  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:07:56.129076  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:07:56.129112  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:07:56.129127  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:07:56.130934  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:07:56.131644  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:07:56.144939  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:07:56.144982  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:07:56.228011  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:07:56.228041  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:07:56.233470  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:07:56.233491  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:07:56.315009  217964 start.go:729] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1020 00:07:56.318511  217964 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:07:56.318542  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:07:56.326408  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:07:56.326432  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:07:56.415567  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:07:56.418479  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:07:56.418503  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:07:56.435372  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:07:56.435395  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:07:56.527801  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:07:56.527862  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:07:56.544511  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:07:56.544538  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:07:56.725289  217964 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:07:56.725367  217964 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:07:56.825193  217964 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:07:57.433165  217964 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.017530987s)
	I1020 00:07:57.433302  217964 addons.go:375] Verifying addon metrics-server=true in "no-preload-20211020000033-6808"
	I1020 00:07:57.923014  217964 pod_ready.go:102] pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.213490  217964 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.38824505s)
	I1020 00:07:55.711375  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.211495  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:56.010655  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.510477  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.215639  217964 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1020 00:07:58.215670  217964 addons.go:406] enableAddons completed in 2.480020377s
	I1020 00:08:00.420921  217964 pod_ready.go:102] pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:56.847781  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:07:58.848054  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:00.852554  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:02.348310  217964 pod_ready.go:92] pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.348334  217964 pod_ready.go:81] duration metric: took 6.509614796s waiting for pod "coredns-78fcd69978-cfgnn" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.348347  217964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.352320  217964 pod_ready.go:92] pod "etcd-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.352335  217964 pod_ready.go:81] duration metric: took 3.98209ms waiting for pod "etcd-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.352350  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.356449  217964 pod_ready.go:92] pod "kube-apiserver-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.356466  217964 pod_ready.go:81] duration metric: took 4.108911ms waiting for pod "kube-apiserver-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.356477  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.360474  217964 pod_ready.go:92] pod "kube-controller-manager-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.360490  217964 pod_ready.go:81] duration metric: took 4.005248ms waiting for pod "kube-controller-manager-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.360501  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4hhrk" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.365409  217964 pod_ready.go:92] pod "kube-proxy-4hhrk" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.365426  217964 pod_ready.go:81] duration metric: took 4.91854ms waiting for pod "kube-proxy-4hhrk" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.365435  217964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.745032  217964 pod_ready.go:92] pod "kube-scheduler-no-preload-20211020000033-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:02.745052  217964 pod_ready.go:81] duration metric: took 379.609406ms waiting for pod "kube-scheduler-no-preload-20211020000033-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:02.745062  217964 pod_ready.go:38] duration metric: took 6.911974423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:02.745085  217964 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:08:02.745133  217964 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:08:02.769571  217964 api_server.go:71] duration metric: took 7.033945394s to wait for apiserver process to appear ...
	I1020 00:08:02.769596  217964 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:08:02.769607  217964 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1020 00:08:02.774610  217964 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1020 00:08:02.775430  217964 api_server.go:140] control plane version: v1.22.3-rc.0
	I1020 00:08:02.775456  217964 api_server.go:130] duration metric: took 5.850374ms to wait for apiserver health ...
	I1020 00:08:02.775467  217964 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:08:02.948826  217964 system_pods.go:59] 9 kube-system pods found
	I1020 00:08:02.948857  217964 system_pods.go:61] "coredns-78fcd69978-cfgnn" [13e4123e-ecd5-4af3-abf8-f2edc2338e97] Running
	I1020 00:08:02.948866  217964 system_pods.go:61] "etcd-no-preload-20211020000033-6808" [c8a3f71b-8d8b-4eb7-acdd-08ec2df16b25] Running
	I1020 00:08:02.948873  217964 system_pods.go:61] "kindnet-ql4j9" [7a45d44d-39ec-4d0c-9786-5527e9bf2cd3] Running
	I1020 00:08:02.948882  217964 system_pods.go:61] "kube-apiserver-no-preload-20211020000033-6808" [62e00e4f-3a3f-4f4a-9763-441d986e55d7] Running
	I1020 00:08:02.948896  217964 system_pods.go:61] "kube-controller-manager-no-preload-20211020000033-6808" [ed578b8c-ed47-42ba-9326-65b0a93186f4] Running
	I1020 00:08:02.948906  217964 system_pods.go:61] "kube-proxy-4hhrk" [50ade4ef-0316-42ff-977b-f03d8b325bd7] Running
	I1020 00:08:02.948913  217964 system_pods.go:61] "kube-scheduler-no-preload-20211020000033-6808" [4a01822f-284b-4ca8-be6d-6320ab89593e] Running
	I1020 00:08:02.948924  217964 system_pods.go:61] "metrics-server-7c784ccb57-f6x62" [ed106f29-92f6-4616-a800-f9aed70e733d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:02.948936  217964 system_pods.go:61] "storage-provisioner" [05447f48-bd32-4072-ac75-5fa7c4f412b0] Running
	I1020 00:08:02.948948  217964 system_pods.go:74] duration metric: took 173.474347ms to wait for pod list to return data ...
	I1020 00:08:02.948962  217964 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:08:03.145070  217964 default_sa.go:45] found service account: "default"
	I1020 00:08:03.145092  217964 default_sa.go:55] duration metric: took 196.12081ms for default service account to be created ...
	I1020 00:08:03.145099  217964 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:08:03.349005  217964 system_pods.go:86] 9 kube-system pods found
	I1020 00:08:03.349036  217964 system_pods.go:89] "coredns-78fcd69978-cfgnn" [13e4123e-ecd5-4af3-abf8-f2edc2338e97] Running
	I1020 00:08:03.349044  217964 system_pods.go:89] "etcd-no-preload-20211020000033-6808" [c8a3f71b-8d8b-4eb7-acdd-08ec2df16b25] Running
	I1020 00:08:03.349052  217964 system_pods.go:89] "kindnet-ql4j9" [7a45d44d-39ec-4d0c-9786-5527e9bf2cd3] Running
	I1020 00:08:03.349059  217964 system_pods.go:89] "kube-apiserver-no-preload-20211020000033-6808" [62e00e4f-3a3f-4f4a-9763-441d986e55d7] Running
	I1020 00:08:03.349069  217964 system_pods.go:89] "kube-controller-manager-no-preload-20211020000033-6808" [ed578b8c-ed47-42ba-9326-65b0a93186f4] Running
	I1020 00:08:03.349078  217964 system_pods.go:89] "kube-proxy-4hhrk" [50ade4ef-0316-42ff-977b-f03d8b325bd7] Running
	I1020 00:08:03.349086  217964 system_pods.go:89] "kube-scheduler-no-preload-20211020000033-6808" [4a01822f-284b-4ca8-be6d-6320ab89593e] Running
	I1020 00:08:03.349097  217964 system_pods.go:89] "metrics-server-7c784ccb57-f6x62" [ed106f29-92f6-4616-a800-f9aed70e733d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:03.349108  217964 system_pods.go:89] "storage-provisioner" [05447f48-bd32-4072-ac75-5fa7c4f412b0] Running
	I1020 00:08:03.349120  217964 system_pods.go:126] duration metric: took 204.015969ms to wait for k8s-apps to be running ...
	I1020 00:08:03.349131  217964 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:08:03.349174  217964 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:08:03.360135  217964 system_svc.go:56] duration metric: took 10.998521ms WaitForService to wait for kubelet.
	I1020 00:08:03.360157  217964 kubeadm.go:547] duration metric: took 7.624534921s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:08:03.360175  217964 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:08:03.545634  217964 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:08:03.545662  217964 node_conditions.go:123] node cpu capacity is 8
	I1020 00:08:03.545677  217964 node_conditions.go:105] duration metric: took 185.498205ms to run NodePressure ...
	I1020 00:08:03.545689  217964 start.go:231] waiting for startup goroutines ...
	I1020 00:08:03.591899  217964 start.go:462] kubectl: 1.20.5, cluster: 1.22.3-rc.0 (minor skew: 2)
	I1020 00:08:03.596167  217964 out.go:176] 
	W1020 00:08:03.596327  217964 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.3-rc.0.
	I1020 00:08:03.598195  217964 out.go:176]   - Want kubectl v1.22.3-rc.0? Try 'minikube kubectl -- get pods -A'
	I1020 00:08:03.600119  217964 out.go:176] * Done! kubectl is now configured to use "no-preload-20211020000033-6808" cluster and "default" namespace by default
	I1020 00:08:00.711172  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:02.786579  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:00.510709  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:03.013660  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:03.347464  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:05.846255  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:05.212036  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:07.711712  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:05.509870  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:07.510555  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:10.010309  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:07.846723  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:10.346355  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:10.211546  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:12.711063  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:12.010697  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:14.510675  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:12.346781  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:14.846938  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:15.211810  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:17.215154  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:17.009773  219746 pod_ready.go:102] pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:17.506010  219746 pod_ready.go:81] duration metric: took 4m0.401043188s waiting for pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace to be "Ready" ...
	E1020 00:08:17.506035  219746 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-w2sg7" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:08:17.506059  219746 pod_ready.go:38] duration metric: took 4m3.721290611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:17.506093  219746 kubeadm.go:604] restartCluster took 5m15.062777315s
	W1020 00:08:17.506227  219746 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:08:17.506266  219746 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:08:17.346810  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:19.347735  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:19.711284  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:22.215935  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:21.361777  219746 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.855484794s)
	I1020 00:08:21.361853  219746 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:08:21.372161  219746 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:08:21.372221  219746 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:08:21.396947  219746 cri.go:76] found id: ""
	I1020 00:08:21.397011  219746 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:08:21.404159  219746 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:08:21.404233  219746 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:08:21.410834  219746 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:08:21.410879  219746 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:08:21.846615  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:23.846773  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:24.711365  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:27.213578  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:26.346501  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:28.846262  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:30.847610  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:33.071488  219746 out.go:203]   - Generating certificates and keys ...
	I1020 00:08:33.074047  219746 out.go:203]   - Booting up control plane ...
	I1020 00:08:33.076569  219746 out.go:203]   - Configuring RBAC rules ...
	I1020 00:08:33.078531  219746 cni.go:93] Creating CNI manager for ""
	I1020 00:08:33.078549  219746 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:08:29.710624  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:31.710767  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:33.710938  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:33.080226  219746 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:08:33.080290  219746 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:08:33.083720  219746 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I1020 00:08:33.083735  219746 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:08:33.095852  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:08:33.300572  219746 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:08:33.300646  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:33.300663  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=old-k8s-version-20211020000023-6808 minikube.k8s.io/updated_at=2021_10_20T00_08_33_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:33.315863  219746 ops.go:34] apiserver oom_adj: 16
	I1020 00:08:33.315888  219746 ops.go:39] adjusting apiserver oom_adj to -10
	I1020 00:08:33.315903  219746 ssh_runner.go:152] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:08:33.424363  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:33.985260  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:34.485258  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:34.985391  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:32.847885  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:35.345784  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:36.210476  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:38.211055  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:35.485080  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:35.984927  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:36.484734  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:36.984836  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:37.485084  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:37.985614  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:38.485004  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:38.985630  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:39.485598  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:39.984957  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:37.345941  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:39.846062  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:40.211341  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:42.711586  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:40.485621  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:40.985685  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:41.484932  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:41.985040  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:42.485382  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:42.985056  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:43.484799  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:43.985351  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:44.485115  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:44.985648  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:42.346242  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:44.845735  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:45.484713  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:45.985006  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:46.485030  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:46.985024  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:47.485096  219746 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:08:47.550489  219746 kubeadm.go:985] duration metric: took 14.249908695s to wait for elevateKubeSystemPrivileges.
	I1020 00:08:47.550521  219746 kubeadm.go:392] StartCluster complete in 5m45.148388324s
	I1020 00:08:47.550537  219746 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:08:47.550609  219746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:08:47.551529  219746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:08:48.067140  219746 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20211020000023-6808" rescaled to 1
	I1020 00:08:48.067192  219746 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1020 00:08:48.069058  219746 out.go:176] * Verifying Kubernetes components...
	I1020 00:08:48.069118  219746 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:08:48.067244  219746 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:08:48.067257  219746 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:08:48.069217  219746 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069233  219746 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069241  219746 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069245  219746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.069249  219746 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:08:48.069282  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.069287  219746 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069313  219746 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.069327  219746 addons.go:165] addon metrics-server should already be in state true
	I1020 00:08:48.069358  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.069223  219746 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20211020000023-6808"
	I1020 00:08:48.069440  219746 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.069451  219746 addons.go:165] addon dashboard should already be in state true
	I1020 00:08:48.069470  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.067458  219746 config.go:176] Loaded profile config "old-k8s-version-20211020000023-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I1020 00:08:48.069580  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.069750  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.069791  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.069927  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.141777  219746 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:08:48.143331  219746 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:08:48.143404  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:08:48.143413  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:08:48.143463  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.146691  219746 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:08:44.711630  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:46.712138  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:49.211204  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:48.148100  219746 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:08:48.148157  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:08:48.148166  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:08:48.146814  219746 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:08:48.148215  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.148223  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:08:48.148255  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.155782  219746 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20211020000023-6808"
	W1020 00:08:48.155825  219746 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:08:48.155854  219746 host.go:66] Checking if "old-k8s-version-20211020000023-6808" exists ...
	I1020 00:08:48.156265  219746 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211020000023-6808 --format={{.State.Status}}
	I1020 00:08:48.195107  219746 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20211020000023-6808" to be "Ready" ...
	I1020 00:08:48.195261  219746 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 00:08:48.203014  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.207510  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.214268  219746 node_ready.go:49] node "old-k8s-version-20211020000023-6808" has status "Ready":"True"
	I1020 00:08:48.214284  219746 node_ready.go:38] duration metric: took 19.145693ms waiting for node "old-k8s-version-20211020000023-6808" to be "Ready" ...
	I1020 00:08:48.214295  219746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:48.217880  219746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:48.231944  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.232011  219746 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:08:48.232029  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:08:48.232093  219746 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211020000023-6808
	I1020 00:08:48.279410  219746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/old-k8s-version-20211020000023-6808/id_rsa Username:docker}
	I1020 00:08:48.327921  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:08:48.327954  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:08:48.333409  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:08:48.333434  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:08:48.429149  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:08:48.429179  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:08:48.429303  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:08:48.429317  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:08:48.434642  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:08:48.444421  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:08:48.444445  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:08:48.446726  219746 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:08:48.446747  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:08:48.530753  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:08:48.530778  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:08:48.532601  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:08:48.534196  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:08:48.613922  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:08:48.613950  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:08:48.630377  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:08:48.630443  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:08:48.720749  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:08:48.720781  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:08:48.729776  219746 start.go:729] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1020 00:08:48.744123  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:08:48.744149  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:08:48.831342  219746 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:08:48.831370  219746 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:08:48.917932  219746 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:08:49.532853  219746 addons.go:375] Verifying addon metrics-server=true in "old-k8s-version-20211020000023-6808"
	I1020 00:08:49.953921  219746 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.035938266s)
	I1020 00:08:49.956053  219746 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1020 00:08:49.956081  219746 addons.go:406] enableAddons completed in 1.888832208s
	I1020 00:08:46.846330  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:48.847352  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:51.211265  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:53.710696  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:50.238063  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:52.737706  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:51.347107  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:53.847545  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:55.711569  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:58.211255  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:55.237490  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:57.237725  219746 pod_ready.go:102] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:57.737885  219746 pod_ready.go:92] pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:57.737909  219746 pod_ready.go:81] duration metric: took 9.520008645s waiting for pod "coredns-fb8b8dccf-cv2xz" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:57.737921  219746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:57.741506  219746 pod_ready.go:92] pod "kube-proxy-wnrrx" in "kube-system" namespace has status "Ready":"True"
	I1020 00:08:57.741522  219746 pod_ready.go:81] duration metric: took 3.596211ms waiting for pod "kube-proxy-wnrrx" in "kube-system" namespace to be "Ready" ...
	I1020 00:08:57.741530  219746 pod_ready.go:38] duration metric: took 9.527217072s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:08:57.741549  219746 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:08:57.741591  219746 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:08:57.763388  219746 api_server.go:71] duration metric: took 9.69617489s to wait for apiserver process to appear ...
	I1020 00:08:57.763410  219746 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:08:57.763421  219746 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 00:08:57.767906  219746 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 00:08:57.768752  219746 api_server.go:140] control plane version: v1.14.0
	I1020 00:08:57.768773  219746 api_server.go:130] duration metric: took 5.356522ms to wait for apiserver health ...
	I1020 00:08:57.768780  219746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:08:57.771689  219746 system_pods.go:59] 5 kube-system pods found
	I1020 00:08:57.771710  219746 system_pods.go:61] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771714  219746 system_pods.go:61] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771717  219746 system_pods.go:61] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771725  219746 system_pods.go:61] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:57.771729  219746 system_pods.go:61] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.771764  219746 system_pods.go:74] duration metric: took 2.979011ms to wait for pod list to return data ...
	I1020 00:08:57.771770  219746 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:08:57.773916  219746 default_sa.go:45] found service account: "default"
	I1020 00:08:57.773933  219746 default_sa.go:55] duration metric: took 2.159599ms for default service account to be created ...
	I1020 00:08:57.773939  219746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:08:57.776457  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:57.776476  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776482  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776485  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776492  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:57.776496  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:57.776513  219746 retry.go:31] will retry after 227.257272ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:58.007180  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:58.007205  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007211  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007215  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007221  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:58.007227  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.007240  219746 retry.go:31] will retry after 307.639038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:58.319324  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:58.319353  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319358  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319362  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319369  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:58.319373  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.319387  219746 retry.go:31] will retry after 348.248857ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:58.671300  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:58.671342  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671348  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671352  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671359  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:58.671363  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:58.671378  219746 retry.go:31] will retry after 437.769008ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:59.113399  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:59.113430  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113437  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113441  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113448  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:59.113453  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.113468  219746 retry.go:31] will retry after 665.003868ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:59.782509  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:08:59.782537  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782543  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782547  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782554  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:08:59.782559  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:08:59.782573  219746 retry.go:31] will retry after 655.575962ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:08:56.346022  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:08:58.346418  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:00.845764  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:00.213065  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:02.711628  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:00.442014  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:00.442039  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442044  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442049  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442056  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:00.442060  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:00.442074  219746 retry.go:31] will retry after 812.142789ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:01.259542  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:01.259569  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259574  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259578  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259585  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:01.259592  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:01.259607  219746 retry.go:31] will retry after 1.109165795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:02.372295  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:02.372331  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372338  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372343  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372351  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:02.372358  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:02.372376  219746 retry.go:31] will retry after 1.54277181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:03.919081  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:03.919109  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919116  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919121  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919130  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:03.919137  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:03.919159  219746 retry.go:31] will retry after 2.200241603s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:03.347031  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:05.845922  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:05.210946  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:07.710832  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:06.123880  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:06.123907  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123912  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123916  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123923  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:06.123928  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:06.123945  219746 retry.go:31] will retry after 2.087459713s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:08.214961  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:08.214986  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.214991  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.214995  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.215002  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:08.215006  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:08.215021  219746 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:07.846260  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:10.396130  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:10.211117  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:12.211422  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:10.834352  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:10.834384  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834391  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834398  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834407  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:10.834414  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:10.834436  219746 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:14.936447  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:14.936470  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936475  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936479  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936485  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:14.936490  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:14.936505  219746 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:12.845655  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:14.846612  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:14.711730  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:17.211626  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:18.823278  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:18.823303  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823308  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823312  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823321  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:18.823325  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:18.823340  219746 retry.go:31] will retry after 6.722686426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:16.846639  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:19.345981  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:19.711337  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:22.211286  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:24.211657  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:21.846122  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:23.846646  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:25.848464  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:26.711508  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:29.211288  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:25.549748  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:25.549776  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549781  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549785  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549792  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:25.549797  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:25.549812  219746 retry.go:31] will retry after 7.804314206s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:28.346283  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:30.845629  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:31.710942  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:34.212171  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:33.358144  219746 system_pods.go:86] 5 kube-system pods found
	I1020 00:09:33.358169  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358175  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358179  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358186  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:33.358192  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:33.358210  219746 retry.go:31] will retry after 8.98756758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:32.846280  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:35.346072  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:36.711106  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:38.711680  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:37.346103  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:39.346525  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:41.211241  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:43.710261  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:42.350689  219746 system_pods.go:86] 7 kube-system pods found
	I1020 00:09:42.350716  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350721  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350726  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:09:42.350729  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350733  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:09:42.350741  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:42.350759  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:42.350774  219746 retry.go:31] will retry after 8.483786333s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1020 00:09:41.846596  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:44.345879  234720 pod_ready.go:102] pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:45.341698  234720 pod_ready.go:81] duration metric: took 4m0.385310648s waiting for pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace to be "Ready" ...
	E1020 00:09:45.341722  234720 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-nggb2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:09:45.341744  234720 pod_ready.go:38] duration metric: took 4m13.983069507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:09:45.341772  234720 kubeadm.go:604] restartCluster took 4m31.116400212s
	W1020 00:09:45.341897  234720 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:09:45.341923  234720 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:09:48.648780  234720 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.306837167s)
	I1020 00:09:48.648848  234720 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:09:48.659189  234720 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:09:48.659255  234720 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:09:48.682898  234720 cri.go:76] found id: ""
	I1020 00:09:48.682953  234720 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:09:48.689781  234720 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:09:48.689847  234720 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:09:48.696235  234720 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:09:48.696275  234720 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:09:45.713715  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:48.211644  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:48.959025  234720 out.go:203]   - Generating certificates and keys ...
	I1020 00:09:49.766256  234720 out.go:203]   - Booting up control plane ...
	I1020 00:09:50.710920  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:52.711147  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:50.839010  219746 system_pods.go:86] 8 kube-system pods found
	I1020 00:09:50.839036  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839041  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839045  219746 system_pods.go:89] "kube-apiserver-old-k8s-version-20211020000023-6808" [071c1402-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:09:50.839049  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839053  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839057  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839064  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:09:50.839068  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:09:50.839083  219746 retry.go:31] will retry after 11.506963942s: missing components: etcd, kube-apiserver
	I1020 00:09:54.711542  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:09:57.211505  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:02.334254  234720 out.go:203]   - Configuring RBAC rules ...
	I1020 00:10:02.747781  234720 cni.go:93] Creating CNI manager for ""
	I1020 00:10:02.747804  234720 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:09:59.711870  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:02.211571  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:04.211770  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:02.351168  219746 system_pods.go:86] 9 kube-system pods found
	I1020 00:10:02.351202  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351211  219746 system_pods.go:89] "etcd-old-k8s-version-20211020000023-6808" [0daacb3c-313a-11ec-9318-0242c0a84c02] Pending
	I1020 00:10:02.351217  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351224  219746 system_pods.go:89] "kube-apiserver-old-k8s-version-20211020000023-6808" [071c1402-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351232  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351238  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351245  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351261  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:02.351271  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:02.351289  219746 retry.go:31] will retry after 18.277510002s: missing components: etcd
	I1020 00:10:02.749516  234720 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:10:02.749566  234720 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:10:02.753044  234720 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1020 00:10:02.753063  234720 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:10:02.765379  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:10:02.919931  234720 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:10:02.919985  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=default-k8s-different-port-20211020000323-6808 minikube.k8s.io/updated_at=2021_10_20T00_10_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:02.919986  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:02.935306  234720 ops.go:34] apiserver oom_adj: -16
	I1020 00:10:03.021467  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:03.578712  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:04.079054  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:04.579437  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:05.078582  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:05.578722  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:06.078723  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:06.711006  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:09.211032  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:06.578539  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:07.079219  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:07.579268  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:08.078686  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:08.579083  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:09.078510  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:09.578920  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:10.078933  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:10.579323  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:11.079521  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:11.711335  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:14.211886  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:11.578763  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:12.079233  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:12.578496  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:13.079313  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:13.578672  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:14.078688  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:14.579315  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:15.078552  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:15.578465  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:16.078518  234720 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:10:16.138441  234720 kubeadm.go:985] duration metric: took 13.218517339s to wait for elevateKubeSystemPrivileges.
	I1020 00:10:16.138470  234720 kubeadm.go:392] StartCluster complete in 5m1.959184868s
	I1020 00:10:16.138486  234720 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:10:16.138564  234720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:10:16.139606  234720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:10:16.658197  234720 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211020000323-6808" rescaled to 1
	I1020 00:10:16.658247  234720 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1020 00:10:16.660424  234720 out.go:176] * Verifying Kubernetes components...
	I1020 00:10:16.660472  234720 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:10:16.658307  234720 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:10:16.658320  234720 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:10:16.660599  234720 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660617  234720 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660618  234720 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660628  234720 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660635  234720 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660636  234720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211020000323-6808"
	W1020 00:10:16.660643  234720 addons.go:165] addon dashboard should already be in state true
	I1020 00:10:16.660672  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	W1020 00:10:16.660622  234720 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:10:16.660706  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	I1020 00:10:16.658474  234720 config.go:176] Loaded profile config "default-k8s-different-port-20211020000323-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:10:16.660676  234720 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:16.660846  234720 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211020000323-6808"
	W1020 00:10:16.660856  234720 addons.go:165] addon metrics-server should already be in state true
	I1020 00:10:16.660881  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	I1020 00:10:16.660979  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.661146  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.661162  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.661289  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.724367  234720 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211020000323-6808"
	W1020 00:10:16.724394  234720 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:10:16.724422  234720 host.go:66] Checking if "default-k8s-different-port-20211020000323-6808" exists ...
	I1020 00:10:16.724910  234720 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211020000323-6808 --format={{.State.Status}}
	I1020 00:10:16.727583  234720 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:10:16.729827  234720 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:10:16.727698  234720 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:10:16.729874  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:10:16.729878  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 00:10:16.729883  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:10:16.731406  234720 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:10:16.729928  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.729933  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.713586  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:19.210895  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:16.733093  234720 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:10:16.733162  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:10:16.733176  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:10:16.733230  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.738519  234720 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211020000323-6808" to be "Ready" ...
	I1020 00:10:16.738946  234720 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 00:10:16.741498  234720 node_ready.go:49] node "default-k8s-different-port-20211020000323-6808" has status "Ready":"True"
	I1020 00:10:16.741523  234720 node_ready.go:38] duration metric: took 2.977399ms waiting for node "default-k8s-different-port-20211020000323-6808" to be "Ready" ...
	I1020 00:10:16.741534  234720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:10:16.753083  234720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-dh86x" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:16.793382  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.794997  234720 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:10:16.795019  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:10:16.795071  234720 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211020000323-6808
	I1020 00:10:16.795310  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.795316  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.856577  234720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/default-k8s-different-port-20211020000323-6808/id_rsa Username:docker}
	I1020 00:10:16.926352  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:10:16.926378  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:10:16.927216  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:10:16.927234  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:10:16.927252  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:10:16.939653  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:10:16.939675  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:10:16.942519  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:10:16.942535  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:10:16.954050  234720 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:10:16.954070  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:10:17.023914  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:10:17.023996  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:10:17.027519  234720 start.go:729] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I1020 00:10:17.112522  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:10:17.114129  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:10:17.114181  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:10:17.119679  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:10:17.138032  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:10:17.138054  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:10:17.235797  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:10:17.235834  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:10:17.319900  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:10:17.319926  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:10:17.413422  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:10:17.413452  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:10:17.513251  234720 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:10:17.513281  234720 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:10:17.613735  234720 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:10:18.232785  234720 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113068494s)
	I1020 00:10:18.232821  234720 addons.go:375] Verifying addon metrics-server=true in "default-k8s-different-port-20211020000323-6808"
	I1020 00:10:18.824654  234720 pod_ready.go:102] pod "coredns-78fcd69978-dh86x" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:19.422477  234720 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.808672424s)
	I1020 00:10:20.635136  219746 system_pods.go:86] 9 kube-system pods found
	I1020 00:10:20.635173  219746 system_pods.go:89] "coredns-fb8b8dccf-cv2xz" [e55f825b-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635182  219746 system_pods.go:89] "etcd-old-k8s-version-20211020000023-6808" [0daacb3c-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635194  219746 system_pods.go:89] "kindnet-d4px8" [e5c73929-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635201  219746 system_pods.go:89] "kube-apiserver-old-k8s-version-20211020000023-6808" [071c1402-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635208  219746 system_pods.go:89] "kube-controller-manager-old-k8s-version-20211020000023-6808" [01bf202a-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635215  219746 system_pods.go:89] "kube-proxy-wnrrx" [e5c793fd-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635226  219746 system_pods.go:89] "kube-scheduler-old-k8s-version-20211020000023-6808" [038886fe-313a-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635242  219746 system_pods.go:89] "metrics-server-8546d8b77b-gg7n5" [e73e3666-3139-11ec-9318-0242c0a84c02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:20.635249  219746 system_pods.go:89] "storage-provisioner" [e689c982-3139-11ec-9318-0242c0a84c02] Running
	I1020 00:10:20.635260  219746 system_pods.go:126] duration metric: took 1m22.861315984s to wait for k8s-apps to be running ...
	I1020 00:10:20.635275  219746 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:10:20.635335  219746 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:10:20.647789  219746 system_svc.go:56] duration metric: took 12.50524ms WaitForService to wait for kubelet.
	I1020 00:10:20.647860  219746 kubeadm.go:547] duration metric: took 1m32.580603826s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:10:20.647886  219746 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:10:20.650510  219746 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:10:20.650533  219746 node_conditions.go:123] node cpu capacity is 8
	I1020 00:10:20.650545  219746 node_conditions.go:105] duration metric: took 2.653157ms to run NodePressure ...
	I1020 00:10:20.650558  219746 start.go:231] waiting for startup goroutines ...
	I1020 00:10:20.696114  219746 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I1020 00:10:20.698415  219746 out.go:176] 
	W1020 00:10:20.698573  219746 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.14.0.
	I1020 00:10:20.700020  219746 out.go:176]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I1020 00:10:20.701497  219746 out.go:176] * Done! kubectl is now configured to use "old-k8s-version-20211020000023-6808" cluster and "default" namespace by default
	I1020 00:10:19.424459  234720 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1020 00:10:19.424490  234720 addons.go:406] enableAddons completed in 2.766176689s
	I1020 00:10:21.317262  234720 pod_ready.go:102] pod "coredns-78fcd69978-dh86x" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:22.267399  234720 pod_ready.go:92] pod "coredns-78fcd69978-dh86x" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.267422  234720 pod_ready.go:81] duration metric: took 5.51431187s waiting for pod "coredns-78fcd69978-dh86x" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.267435  234720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-tsrfv" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.314604  234720 pod_ready.go:92] pod "coredns-78fcd69978-tsrfv" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.314627  234720 pod_ready.go:81] duration metric: took 47.185255ms waiting for pod "coredns-78fcd69978-tsrfv" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.314640  234720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.319204  234720 pod_ready.go:92] pod "etcd-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.319226  234720 pod_ready.go:81] duration metric: took 4.576892ms waiting for pod "etcd-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.319240  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.323789  234720 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.323861  234720 pod_ready.go:81] duration metric: took 4.610771ms waiting for pod "kube-apiserver-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.323875  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.329767  234720 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.329784  234720 pod_ready.go:81] duration metric: took 5.900165ms waiting for pod "kube-controller-manager-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.329795  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rknr2" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.666474  234720 pod_ready.go:92] pod "kube-proxy-rknr2" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:22.666498  234720 pod_ready.go:81] duration metric: took 336.694675ms waiting for pod "kube-proxy-rknr2" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:22.666510  234720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:23.066455  234720 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:10:23.066480  234720 pod_ready.go:81] duration metric: took 399.960854ms waiting for pod "kube-scheduler-default-k8s-different-port-20211020000323-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:10:23.066491  234720 pod_ready.go:38] duration metric: took 6.324944414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:10:23.066511  234720 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:10:23.066554  234720 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:10:23.132714  234720 api_server.go:71] duration metric: took 6.474440468s to wait for apiserver process to appear ...
	I1020 00:10:23.132742  234720 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:10:23.132753  234720 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I1020 00:10:23.138122  234720 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I1020 00:10:23.139100  234720 api_server.go:140] control plane version: v1.22.2
	I1020 00:10:23.139119  234720 api_server.go:130] duration metric: took 6.370574ms to wait for apiserver health ...
	I1020 00:10:23.139128  234720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:10:23.269719  234720 system_pods.go:59] 10 kube-system pods found
	I1020 00:10:23.269753  234720 system_pods.go:61] "coredns-78fcd69978-dh86x" [188f6133-ab49-46ad-be91-146ffd93d595] Running
	I1020 00:10:23.269761  234720 system_pods.go:61] "coredns-78fcd69978-tsrfv" [2add0bd9-74d3-45a5-bd11-ef5d70cf45d2] Running
	I1020 00:10:23.269774  234720 system_pods.go:61] "etcd-default-k8s-different-port-20211020000323-6808" [dbcab5b3-3105-4393-9d37-4552c66f5505] Running
	I1020 00:10:23.269780  234720 system_pods.go:61] "kindnet-6gmwx" [2b142d48-e636-4759-86c9-40668c5baf19] Running
	I1020 00:10:23.269786  234720 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20211020000323-6808" [b1577e5b-0e7d-4374-b227-c92ed774560e] Running
	I1020 00:10:23.269792  234720 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20211020000323-6808" [29bd374c-0660-412b-8dd7-d7c475101169] Running
	I1020 00:10:23.269798  234720 system_pods.go:61] "kube-proxy-rknr2" [f68b99a7-f5e6-42cc-8e61-c92d5b5aedaf] Running
	I1020 00:10:23.269805  234720 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20211020000323-6808" [e25b65d2-f014-460b-bb89-e17c0688bf81] Running
	I1020 00:10:23.269815  234720 system_pods.go:61] "metrics-server-7c784ccb57-26598" [efda9d90-3647-45ea-8851-1a0f6d98d293] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:23.269822  234720 system_pods.go:61] "storage-provisioner" [4ae5bfc6-73b5-4869-934c-6faf5678a670] Running
	I1020 00:10:23.269829  234720 system_pods.go:74] duration metric: took 130.695337ms to wait for pod list to return data ...
	I1020 00:10:23.269840  234720 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:10:23.466521  234720 default_sa.go:45] found service account: "default"
	I1020 00:10:23.466544  234720 default_sa.go:55] duration metric: took 196.697429ms for default service account to be created ...
	I1020 00:10:23.466553  234720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:10:23.668458  234720 system_pods.go:86] 10 kube-system pods found
	I1020 00:10:23.668483  234720 system_pods.go:89] "coredns-78fcd69978-dh86x" [188f6133-ab49-46ad-be91-146ffd93d595] Running
	I1020 00:10:23.668489  234720 system_pods.go:89] "coredns-78fcd69978-tsrfv" [2add0bd9-74d3-45a5-bd11-ef5d70cf45d2] Running
	I1020 00:10:23.668494  234720 system_pods.go:89] "etcd-default-k8s-different-port-20211020000323-6808" [dbcab5b3-3105-4393-9d37-4552c66f5505] Running
	I1020 00:10:23.668500  234720 system_pods.go:89] "kindnet-6gmwx" [2b142d48-e636-4759-86c9-40668c5baf19] Running
	I1020 00:10:23.668507  234720 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20211020000323-6808" [b1577e5b-0e7d-4374-b227-c92ed774560e] Running
	I1020 00:10:23.668515  234720 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20211020000323-6808" [29bd374c-0660-412b-8dd7-d7c475101169] Running
	I1020 00:10:23.668522  234720 system_pods.go:89] "kube-proxy-rknr2" [f68b99a7-f5e6-42cc-8e61-c92d5b5aedaf] Running
	I1020 00:10:23.668528  234720 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20211020000323-6808" [e25b65d2-f014-460b-bb89-e17c0688bf81] Running
	I1020 00:10:23.668538  234720 system_pods.go:89] "metrics-server-7c784ccb57-26598" [efda9d90-3647-45ea-8851-1a0f6d98d293] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:10:23.668549  234720 system_pods.go:89] "storage-provisioner" [4ae5bfc6-73b5-4869-934c-6faf5678a670] Running
	I1020 00:10:23.668558  234720 system_pods.go:126] duration metric: took 201.99918ms to wait for k8s-apps to be running ...
	I1020 00:10:23.668570  234720 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:10:23.668613  234720 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:10:23.679089  234720 system_svc.go:56] duration metric: took 10.514134ms WaitForService to wait for kubelet.
	I1020 00:10:23.679111  234720 kubeadm.go:547] duration metric: took 7.020842249s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:10:23.679136  234720 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:10:23.866793  234720 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:10:23.866821  234720 node_conditions.go:123] node cpu capacity is 8
	I1020 00:10:23.866834  234720 node_conditions.go:105] duration metric: took 187.69259ms to run NodePressure ...
	I1020 00:10:23.866847  234720 start.go:231] waiting for startup goroutines ...
	I1020 00:10:23.912895  234720 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1020 00:10:23.915208  234720 out.go:176] 
	W1020 00:10:23.915412  234720 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.2.
	I1020 00:10:23.917035  234720 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1020 00:10:23.918582  234720 out.go:176] * Done! kubectl is now configured to use "default-k8s-different-port-20211020000323-6808" cluster and "default" namespace by default
	I1020 00:10:21.710877  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:23.769719  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:26.210694  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:28.211097  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:30.711547  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:32.711934  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:34.712546  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:37.211912  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:39.213345  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:41.710961  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:44.213511  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:46.711643  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:49.210819  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:51.211641  240589 pod_ready.go:102] pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace has status "Ready":"False"
	I1020 00:10:53.706588  240589 pod_ready.go:81] duration metric: took 4m0.06039874s waiting for pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace to be "Ready" ...
	E1020 00:10:53.706614  240589 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-2df4s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1020 00:10:53.706638  240589 pod_ready.go:38] duration metric: took 4m9.616806273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:10:53.706671  240589 kubeadm.go:604] restartCluster took 4m25.96065055s
	W1020 00:10:53.706790  240589 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1020 00:10:53.706815  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1020 00:10:56.932077  240589 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.225243695s)
	I1020 00:10:56.932134  240589 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1020 00:10:56.941684  240589 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1020 00:10:56.941744  240589 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 00:10:56.965507  240589 cri.go:76] found id: ""
	I1020 00:10:56.965564  240589 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 00:10:56.972272  240589 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1020 00:10:56.972325  240589 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 00:10:56.978549  240589 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 00:10:56.978589  240589 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 00:10:57.242241  240589 out.go:203]   - Generating certificates and keys ...
	I1020 00:10:57.984541  240589 out.go:203]   - Booting up control plane ...
	I1020 00:11:11.033196  240589 out.go:203]   - Configuring RBAC rules ...
	I1020 00:11:11.446085  240589 cni.go:93] Creating CNI manager for ""
	I1020 00:11:11.446108  240589 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1020 00:11:11.447896  240589 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1020 00:11:11.447949  240589 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
	I1020 00:11:11.451383  240589 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.2/kubectl ...
	I1020 00:11:11.451398  240589 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1020 00:11:11.463512  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 00:11:11.616347  240589 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 00:11:11.616420  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:11.616447  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl label nodes minikube.k8s.io/version=v1.23.2 minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6 minikube.k8s.io/name=embed-certs-20211020000119-6808 minikube.k8s.io/updated_at=2021_10_20T00_11_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:11.712450  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:11.723496  240589 ops.go:34] apiserver oom_adj: -16
	I1020 00:11:12.267918  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:12.767542  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:13.268227  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:13.767698  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:14.267839  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:14.768115  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:15.267377  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:15.767937  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:16.267770  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:16.767903  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:17.267999  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:17.768119  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:18.267990  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:18.767646  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:19.267940  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:19.768229  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:20.267617  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:20.767990  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:21.267747  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:21.768005  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:22.267449  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:22.767983  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:23.268138  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:23.767976  240589 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 00:11:23.820916  240589 kubeadm.go:985] duration metric: took 12.204555651s to wait for elevateKubeSystemPrivileges.
	I1020 00:11:23.820946  240589 kubeadm.go:392] StartCluster complete in 4m56.117845004s
	I1020 00:11:23.820962  240589 settings.go:142] acquiring lock: {Name:mkae5b2371003bb83973886295163ea24b8b77a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:11:23.821068  240589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1020 00:11:23.821853  240589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig: {Name:mk397e230cae60dd4f11434c0167f3868cd3be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 00:11:24.339414  240589 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211020000119-6808" rescaled to 1
	I1020 00:11:24.339477  240589 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}
	I1020 00:11:24.340998  240589 out.go:176] * Verifying Kubernetes components...
	I1020 00:11:24.341064  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:24.339526  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 00:11:24.339559  240589 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1020 00:11:24.339714  240589 config.go:176] Loaded profile config "embed-certs-20211020000119-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1020 00:11:24.341180  240589 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341195  240589 addons.go:65] Setting metrics-server=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341200  240589 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341208  240589 addons.go:153] Setting addon metrics-server=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.341217  240589 addons.go:165] addon metrics-server should already be in state true
	I1020 00:11:24.341221  240589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211020000119-6808"
	I1020 00:11:24.341253  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.341206  240589 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.341305  240589 addons.go:165] addon storage-provisioner should already be in state true
	I1020 00:11:24.341353  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.341180  240589 addons.go:65] Setting dashboard=true in profile "embed-certs-20211020000119-6808"
	I1020 00:11:24.341446  240589 addons.go:153] Setting addon dashboard=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.341455  240589 addons.go:165] addon dashboard should already be in state true
	I1020 00:11:24.341487  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.341570  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.341742  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.341843  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.341909  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.397619  240589 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 00:11:24.397754  240589 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:11:24.397772  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
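The "scp memory" entries above are minikube streaming addon manifests held in memory straight to a file on the node, then logging the byte count. A minimal local sketch of that step (the manifest body and temp-file destination here are illustrative, not taken from the test run; the real runner writes over ssh):

```shell
# Sketch of minikube's "scp memory" step: stream an in-memory manifest to a
# target path and report its size, as the ssh_runner log lines do.
# The manifest text and destination are hypothetical stand-ins.
manifest='apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard'
dest=$(mktemp)
# Write the string verbatim (no trailing newline), as a remote `tee` would.
printf '%s' "$manifest" | tee "$dest" >/dev/null
echo "scp memory --> $dest ($(wc -c < "$dest" | tr -d ' ') bytes)"
```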
	I1020 00:11:24.399019  240589 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1020 00:11:24.399067  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 00:11:24.399080  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1020 00:11:24.397823  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.399127  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.402479  240589 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1020 00:11:24.403896  240589 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1020 00:11:24.403951  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 00:11:24.403964  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 00:11:24.404017  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.415577  240589 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211020000119-6808"
	W1020 00:11:24.415605  240589 addons.go:165] addon default-storageclass should already be in state true
	I1020 00:11:24.415634  240589 host.go:66] Checking if "embed-certs-20211020000119-6808" exists ...
	I1020 00:11:24.416150  240589 cli_runner.go:115] Run: docker container inspect embed-certs-20211020000119-6808 --format={{.State.Status}}
	I1020 00:11:24.456366  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.457639  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.470756  240589 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 00:11:24.470776  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 00:11:24.470929  240589 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211020000119-6808
	I1020 00:11:24.473391  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.476692  240589 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211020000119-6808" to be "Ready" ...
	I1020 00:11:24.476775  240589 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
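The one-line configmap rewrite logged just above injects a `hosts` block for host.minikube.internal ahead of CoreDNS's `forward` plugin. Unpacked as a standalone sketch (the GNU sed program and the 192.168.58.1 address are copied from the log; the Corefile below is a minimal stand-in for the real kube-system configmap, which the actual command fetches and replaces via kubectl):

```shell
# Sketch: insert minikube's host.minikube.internal record before the
# `forward` plugin stanza, as the logged sed pipeline does (GNU sed assumed).
corefile='.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}'
patched=$(printf '%s\n' "$corefile" \
  | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }')
printf '%s\n' "$patched"
```

Because `hosts` appears before `forward`, queries for host.minikube.internal are answered locally and everything else falls through to the host resolver.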
	I1020 00:11:24.484753  240589 node_ready.go:49] node "embed-certs-20211020000119-6808" has status "Ready":"True"
	I1020 00:11:24.484767  240589 node_ready.go:38] duration metric: took 8.046413ms waiting for node "embed-certs-20211020000119-6808" to be "Ready" ...
	I1020 00:11:24.484781  240589 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:11:24.511656  240589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/embed-certs-20211020000119-6808/id_rsa Username:docker}
	I1020 00:11:24.519087  240589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:24.630109  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 00:11:24.630138  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1020 00:11:24.631417  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 00:11:24.634155  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 00:11:24.634182  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 00:11:24.635309  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 00:11:24.727020  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 00:11:24.727051  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1020 00:11:24.729801  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 00:11:24.729824  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 00:11:24.818606  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 00:11:24.818634  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 00:11:24.824604  240589 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:11:24.824631  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1020 00:11:24.912894  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 00:11:24.912919  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1020 00:11:24.925375  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 00:11:24.935739  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 00:11:24.935867  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 00:11:25.117528  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 00:11:25.117563  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 00:11:25.127010  240589 start.go:729] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1020 00:11:25.214327  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 00:11:25.214408  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 00:11:25.327640  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 00:11:25.327677  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 00:11:25.426937  240589 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:11:25.426971  240589 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 00:11:25.512430  240589 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 00:11:26.216662  240589 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291244163s)
	I1020 00:11:26.216701  240589 addons.go:375] Verifying addon metrics-server=true in "embed-certs-20211020000119-6808"
	I1020 00:11:26.626154  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:26.935722  240589 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.423190206s)
	I1020 00:11:26.937481  240589 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1020 00:11:26.937509  240589 addons.go:406] enableAddons completed in 2.597963961s
	I1020 00:11:29.030588  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:31.530821  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:34.030653  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:36.530059  240589 pod_ready.go:102] pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace has status "Ready":"False"
	I1020 00:11:38.028158  240589 pod_ready.go:97] error getting pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-6pnp8" not found
	I1020 00:11:38.028189  240589 pod_ready.go:81] duration metric: took 13.509042653s waiting for pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace to be "Ready" ...
	E1020 00:11:38.028199  240589 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-6pnp8" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-6pnp8" not found
	I1020 00:11:38.028205  240589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-dz9xq" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.032342  240589 pod_ready.go:92] pod "coredns-78fcd69978-dz9xq" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.032358  240589 pod_ready.go:81] duration metric: took 4.148783ms waiting for pod "coredns-78fcd69978-dz9xq" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.032368  240589 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.036197  240589 pod_ready.go:92] pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.036212  240589 pod_ready.go:81] duration metric: took 3.838164ms waiting for pod "etcd-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.036225  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.039912  240589 pod_ready.go:92] pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.039926  240589 pod_ready.go:81] duration metric: took 3.695119ms waiting for pod "kube-apiserver-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.039935  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.043596  240589 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.043609  240589 pod_ready.go:81] duration metric: took 3.669613ms waiting for pod "kube-controller-manager-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.043617  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-74h96" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.228457  240589 pod_ready.go:92] pod "kube-proxy-74h96" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.228477  240589 pod_ready.go:81] duration metric: took 184.854795ms waiting for pod "kube-proxy-74h96" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.228486  240589 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.628782  240589 pod_ready.go:92] pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace has status "Ready":"True"
	I1020 00:11:38.628805  240589 pod_ready.go:81] duration metric: took 400.312861ms waiting for pod "kube-scheduler-embed-certs-20211020000119-6808" in "kube-system" namespace to be "Ready" ...
	I1020 00:11:38.628813  240589 pod_ready.go:38] duration metric: took 14.144023407s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1020 00:11:38.628829  240589 api_server.go:51] waiting for apiserver process to appear ...
	I1020 00:11:38.628873  240589 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 00:11:38.650528  240589 api_server.go:71] duration metric: took 14.311015391s to wait for apiserver process to appear ...
	I1020 00:11:38.650547  240589 api_server.go:87] waiting for apiserver healthz status ...
	I1020 00:11:38.650557  240589 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 00:11:38.654756  240589 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1020 00:11:38.655563  240589 api_server.go:140] control plane version: v1.22.2
	I1020 00:11:38.655581  240589 api_server.go:130] duration metric: took 5.028701ms to wait for apiserver health ...
	I1020 00:11:38.655588  240589 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 00:11:38.830888  240589 system_pods.go:59] 9 kube-system pods found
	I1020 00:11:38.830918  240589 system_pods.go:61] "coredns-78fcd69978-dz9xq" [f68a2f7c-693f-456d-afd6-3e45c7a1169c] Running
	I1020 00:11:38.830923  240589 system_pods.go:61] "etcd-embed-certs-20211020000119-6808" [b539f6cf-e0da-475a-ad06-cc380dded84a] Running
	I1020 00:11:38.830927  240589 system_pods.go:61] "kindnet-7g4v2" [2d354868-dd64-457b-ab45-716d8513332c] Running
	I1020 00:11:38.830931  240589 system_pods.go:61] "kube-apiserver-embed-certs-20211020000119-6808" [8ee6c0d9-7dd3-48a0-b215-2c15e47338ad] Running
	I1020 00:11:38.830935  240589 system_pods.go:61] "kube-controller-manager-embed-certs-20211020000119-6808" [f339ba15-c4a4-4598-9e6a-865465e70ef7] Running
	I1020 00:11:38.830938  240589 system_pods.go:61] "kube-proxy-74h96" [8ebc3de5-8c9a-460f-bd78-fd4c2cd7d52e] Running
	I1020 00:11:38.830942  240589 system_pods.go:61] "kube-scheduler-embed-certs-20211020000119-6808" [c8277352-5fd0-42f1-812c-f9d29cf44b07] Running
	I1020 00:11:38.830948  240589 system_pods.go:61] "metrics-server-7c784ccb57-4qx7n" [2d5a6732-f488-4534-a9dc-6d0e07139c47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:11:38.830952  240589 system_pods.go:61] "storage-provisioner" [0ebfffac-6694-46da-adfb-99dfd775ad99] Running
	I1020 00:11:38.830958  240589 system_pods.go:74] duration metric: took 175.36582ms to wait for pod list to return data ...
	I1020 00:11:38.830965  240589 default_sa.go:34] waiting for default service account to be created ...
	I1020 00:11:39.029152  240589 default_sa.go:45] found service account: "default"
	I1020 00:11:39.029179  240589 default_sa.go:55] duration metric: took 198.208683ms for default service account to be created ...
	I1020 00:11:39.029190  240589 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 00:11:39.231552  240589 system_pods.go:86] 9 kube-system pods found
	I1020 00:11:39.231581  240589 system_pods.go:89] "coredns-78fcd69978-dz9xq" [f68a2f7c-693f-456d-afd6-3e45c7a1169c] Running
	I1020 00:11:39.231587  240589 system_pods.go:89] "etcd-embed-certs-20211020000119-6808" [b539f6cf-e0da-475a-ad06-cc380dded84a] Running
	I1020 00:11:39.231591  240589 system_pods.go:89] "kindnet-7g4v2" [2d354868-dd64-457b-ab45-716d8513332c] Running
	I1020 00:11:39.231596  240589 system_pods.go:89] "kube-apiserver-embed-certs-20211020000119-6808" [8ee6c0d9-7dd3-48a0-b215-2c15e47338ad] Running
	I1020 00:11:39.231600  240589 system_pods.go:89] "kube-controller-manager-embed-certs-20211020000119-6808" [f339ba15-c4a4-4598-9e6a-865465e70ef7] Running
	I1020 00:11:39.231604  240589 system_pods.go:89] "kube-proxy-74h96" [8ebc3de5-8c9a-460f-bd78-fd4c2cd7d52e] Running
	I1020 00:11:39.231607  240589 system_pods.go:89] "kube-scheduler-embed-certs-20211020000119-6808" [c8277352-5fd0-42f1-812c-f9d29cf44b07] Running
	I1020 00:11:39.231614  240589 system_pods.go:89] "metrics-server-7c784ccb57-4qx7n" [2d5a6732-f488-4534-a9dc-6d0e07139c47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 00:11:39.231618  240589 system_pods.go:89] "storage-provisioner" [0ebfffac-6694-46da-adfb-99dfd775ad99] Running
	I1020 00:11:39.231626  240589 system_pods.go:126] duration metric: took 202.430272ms to wait for k8s-apps to be running ...
	I1020 00:11:39.231634  240589 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 00:11:39.231674  240589 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1020 00:11:39.240862  240589 system_svc.go:56] duration metric: took 9.2225ms WaitForService to wait for kubelet.
	I1020 00:11:39.240883  240589 kubeadm.go:547] duration metric: took 14.901374004s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1020 00:11:39.240909  240589 node_conditions.go:102] verifying NodePressure condition ...
	I1020 00:11:39.430550  240589 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I1020 00:11:39.430575  240589 node_conditions.go:123] node cpu capacity is 8
	I1020 00:11:39.430586  240589 node_conditions.go:105] duration metric: took 189.672341ms to run NodePressure ...
	I1020 00:11:39.430597  240589 start.go:231] waiting for startup goroutines ...
	I1020 00:11:39.473773  240589 start.go:462] kubectl: 1.20.5, cluster: 1.22.2 (minor skew: 2)
	I1020 00:11:39.475723  240589 out.go:176] 
	W1020 00:11:39.475933  240589 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.2.
	I1020 00:11:39.477366  240589 out.go:176]   - Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
	I1020 00:11:39.478867  240589 out.go:176] * Done! kubectl is now configured to use "embed-certs-20211020000119-6808" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	6bbb4c44400b6       523cad1a4df73       7 seconds ago       Exited              dashboard-metrics-scraper   2                   7c193a6ecff0b
	de8122a69ebdc       e1482a24335a6       27 seconds ago      Running             kubernetes-dashboard        0                   48854236a19a4
	509c6eb7c34d7       6e38f40d628db       28 seconds ago      Running             storage-provisioner         0                   4a43dcd260169
	585d9c2f0d022       8d147537fb7d1       29 seconds ago      Running             coredns                     0                   55fa9a2e203d0
	76d591cc47994       6de166512aa22       30 seconds ago      Running             kindnet-cni                 0                   b0d37b78efd72
	ae35e640ba60e       873127efbc8a7       30 seconds ago      Running             kube-proxy                  0                   b4bcdc2d8941d
	eaa50fd2ebc34       5425bcbd23c54       51 seconds ago      Running             kube-controller-manager     2                   d9d713a115bcd
	f4aa16b4a5bc5       b51ddc1014b04       51 seconds ago      Running             kube-scheduler              2                   8c8fc41fe704c
	e22dc3c6e30ea       0048118155842       51 seconds ago      Running             etcd                        2                   ddd960943a088
	cf3af14936006       e64579b7d8862       51 seconds ago      Running             kube-apiserver              2                   c48d211474888
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2021-10-20 00:06:11 UTC, end at Wed 2021-10-20 00:11:55 UTC. --
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.595140931Z" level=info msg="StopContainer for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" returns successfully"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.595722660Z" level=info msg="StopPodSandbox for \"a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47\""
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.595791621Z" level=info msg="Container to stop \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.697206938Z" level=info msg="TaskExit event &TaskExit{ContainerID:a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47,ID:a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47,Pid:4466,ExitStatus:137,ExitedAt:2021-10-20 00:11:36.696982922 +0000 UTC,XXX_unrecognized:[],}"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.740312931Z" level=info msg="shim disconnected" id=a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.740545035Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.835895262Z" level=info msg="TearDown network for sandbox \"a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47\" successfully"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:36.835929377Z" level=info msg="StopPodSandbox for \"a9a0f99654b7bbfab309603df306b97416034a614438ba7d444c2ef1ad472f47\" returns successfully"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:37.566812143Z" level=info msg="RemoveContainer for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:37.573111540Z" level=info msg="RemoveContainer for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" returns successfully"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:37.573560989Z" level=error msg="ContainerStatus for \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": not found"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:42.336655138Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Oct 20 00:11:42 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:42.386563867Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Oct 20 00:11:42 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:42.387708078Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.337474342Z" level=info msg="CreateContainer within sandbox \"7c193a6ecff0b9420f709fa7fb5c9ef4b54c9a2b5aa3f2bbb13b722a8fa0093d\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,}"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.381150461Z" level=info msg="CreateContainer within sandbox \"7c193a6ecff0b9420f709fa7fb5c9ef4b54c9a2b5aa3f2bbb13b722a8fa0093d\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,} returns container id \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.381659390Z" level=info msg="StartContainer for \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.518394785Z" level=info msg="StartContainer for \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\" returns successfully"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.552319217Z" level=info msg="Finish piping stderr of container \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.552329840Z" level=info msg="Finish piping stdout of container \"6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5\""
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.553212581Z" level=info msg="TaskExit event &TaskExit{ContainerID:6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5,ID:6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5,Pid:5499,ExitStatus:1,ExitedAt:2021-10-20 00:11:48.552958038 +0000 UTC,XXX_unrecognized:[],}"
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.596461537Z" level=info msg="shim disconnected" id=6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5
	Oct 20 00:11:48 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:48.596538867Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:49.593372371Z" level=info msg="RemoveContainer for \"4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5\""
	Oct 20 00:11:49 embed-certs-20211020000119-6808 containerd[486]: time="2021-10-20T00:11:49.598505615Z" level=info msg="RemoveContainer for \"4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5\" returns successfully"
	
	* 
	* ==> coredns [585d9c2f0d0225bfa0e8824b11ef251ead7f8ddd3f6239f107a6bb341d25a396] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211020000119-6808
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211020000119-6808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4e20255d8fb01d11f0b16be976bdc5b5661029f6
	                    minikube.k8s.io/name=embed-certs-20211020000119-6808
	                    minikube.k8s.io/updated_at=2021_10_20T00_11_11_0700
	                    minikube.k8s.io/version=v1.23.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 20 Oct 2021 00:11:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211020000119-6808
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 20 Oct 2021 00:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 20 Oct 2021 00:11:46 +0000   Wed, 20 Oct 2021 00:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211020000119-6808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
	  System UUID:                52bf727a-d689-4fdd-9046-032112131ad0
	  Boot ID:                    3860adfd-184a-4be3-9030-9440e8dafeac
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.2
	  Kube-Proxy Version:         v1.22.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-dz9xq                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-embed-certs-20211020000119-6808                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-7g4v2                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-embed-certs-20211020000119-6808             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-embed-certs-20211020000119-6808    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-74h96                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-embed-certs-20211020000119-6808             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 metrics-server-7c784ccb57-4qx7n                            100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         30s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-lhdc7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kubernetes-dashboard        kubernetes-dashboard-654cf69797-wsz4n                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  52s (x5 over 52s)  kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x4 over 52s)  kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x3 over 52s)  kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                kubelet  Node embed-certs-20211020000119-6808 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a 3d a2 5d 77 67 08 06        .......=.]wg..
	[  +2.604405] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethfef873ab
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 72 b6 be 24 f6 05 08 06        ......r..$....
	[  +0.059770] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth4452a690
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ee 46 1b 55 28 54 08 06        .......F.U(T..
	[  +0.747799] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth84a7eb03
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 62 ca a4 d8 86 c9 08 06        ......b.......
	[Oct20 00:10] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3db2c6be
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 96 c2 98 0f a9 2c 08 06        ...........,..
	[  +0.019959] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth9bedf5aa
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 76 51 7c 28 fe 04 08 06        ......vQ|(....
	[  +1.699758] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth72feb492
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 16 53 98 81 6d 98 08 06        .......S..m...
	[  +0.536253] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth80b02c43
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ba 5b ea d5 c4 e3 08 06        .......[......
	[  +0.439866] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth47588c15
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ea 43 e7 2d d7 2c 08 06        .......C.-.,..
	[Oct20 00:11] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethd6396fc0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 62 48 1b bc 02 c7 08 06        ......bH......
	[  +1.654656] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethc38188b6
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a 80 82 19 9e b2 08 06        ......Z.......
	[  +0.684157] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethd9f4739c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ca c7 3a d0 58 2c 08 06        ........:.X,..
	[  +0.036119] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth21a64279
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a c9 07 fc a4 ec 08 06        ......J.......
	
	* 
	* ==> etcd [e22dc3c6e30ea67414ef6dfef449c8b17c52ed1cb7934e7050a03f9c5e7b82e1] <==
	* {"level":"info","ts":"2021-10-20T00:11:04.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2021-10-20T00:11:04.914Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-10-20T00:11:04.915Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-10-20T00:11:05.342Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211020000119-6808 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-10-20T00:11:05.343Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-10-20T00:11:05.345Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-10-20T00:11:05.345Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  00:11:55 up  1:54,  0 users,  load average: 0.70, 1.95, 2.68
	Linux embed-certs-20211020000119-6808 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [cf3af14936006cd206aa6c1cbf4b87c4b824199cf7a2249808a7eb272135e6e2] <==
	* I1020 00:11:08.438901       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1020 00:11:08.438924       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I1020 00:11:08.438931       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 00:11:08.439119       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1020 00:11:08.512360       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1020 00:11:09.337385       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1020 00:11:09.337406       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1020 00:11:09.343381       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1020 00:11:09.346250       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1020 00:11:09.346271       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1020 00:11:09.718163       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 00:11:09.747029       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1020 00:11:09.840258       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1020 00:11:09.841064       1 controller.go:611] quota admission added evaluator for: endpoints
	I1020 00:11:09.844408       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 00:11:10.436839       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1020 00:11:11.216689       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1020 00:11:11.246447       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1020 00:11:16.325127       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 00:11:23.991570       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1020 00:11:24.146482       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W1020 00:11:28.512115       1 handler_proxy.go:104] no RequestInfo found in the context
	E1020 00:11:28.512203       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1020 00:11:28.512212       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [eaa50fd2ebc348b5c63113ef233fc29de40bfdb3879455d0faf400ba1bb683f1] <==
	* I1020 00:11:24.314577       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-6pnp8"
	I1020 00:11:25.729350       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I1020 00:11:25.741952       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E1020 00:11:25.823272       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I1020 00:11:25.929876       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-4qx7n"
	I1020 00:11:26.335970       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I1020 00:11:26.419301       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.425008       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1020 00:11:26.436593       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.438205       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1020 00:11:26.516628       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-654cf69797 to 1"
	E1020 00:11:26.517921       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.518444       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.614198       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.614667       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1020 00:11:26.615684       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.624203       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1020 00:11:26.627703       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.627774       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1020 00:11:26.631249       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1020 00:11:26.631292       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1020 00:11:26.640114       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-654cf69797-wsz4n"
	I1020 00:11:26.720369       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-lhdc7"
	E1020 00:11:53.505520       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1020 00:11:53.930014       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [ae35e640ba60e61409b32639933d3c690f1c17a23f76260188a6519b8b41ecc8] <==
	* I1020 00:11:25.329417       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I1020 00:11:25.329468       1 server_others.go:140] Detected node IP 192.168.58.2
	W1020 00:11:25.329490       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1020 00:11:25.529406       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1020 00:11:25.529458       1 server_others.go:212] Using iptables Proxier.
	I1020 00:11:25.529472       1 server_others.go:219] creating dualStackProxier for iptables.
	W1020 00:11:25.529492       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1020 00:11:25.529902       1 server.go:649] Version: v1.22.2
	I1020 00:11:25.531095       1 config.go:224] Starting endpoint slice config controller
	I1020 00:11:25.531114       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1020 00:11:25.533917       1 config.go:315] Starting service config controller
	I1020 00:11:25.533946       1 shared_informer.go:240] Waiting for caches to sync for service config
	E1020 00:11:25.537722       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"embed-certs-20211020000119-6808.16af94b2d3655926", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc053f58b5f9ed1ef, ext:403417950, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-embed-certs-20211020000119-6808", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"embed-certs-20211020000119-6808", UID:"embed-certs-20211020000119-6808", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "embed-certs-20211020000119-6808.16af94b2d3655926" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I1020 00:11:25.631224       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1020 00:11:25.634399       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f4aa16b4a5bc58c0900b91a8a9cdee591f91d53c4d0037655694e918f2b05666] <==
	* I1020 00:11:08.435696       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 00:11:08.435988       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1020 00:11:08.436074       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1020 00:11:08.436979       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1020 00:11:08.438758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1020 00:11:08.439077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:08.439162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1020 00:11:08.439232       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1020 00:11:08.439314       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1020 00:11:08.439382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1020 00:11:08.439438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:08.439499       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:08.439560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1020 00:11:08.439778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1020 00:11:08.439908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1020 00:11:08.439973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1020 00:11:08.440028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1020 00:11:08.440112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:09.319084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1020 00:11:09.431835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1020 00:11:09.484559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1020 00:11:09.484665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:09.520294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1020 00:11:09.562854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1020 00:11:09.935944       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-10-20 00:06:11 UTC, end at Wed 2021-10-20 00:11:55 UTC. --
	Oct 20 00:11:33 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:33.553544    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	Oct 20 00:11:34 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:34.556614    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:34 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:34.556986    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.725803    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:36.726083    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.961437    3723 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edf2e828-f91c-4099-880f-04cced3cf065-config-volume\") pod \"edf2e828-f91c-4099-880f-04cced3cf065\" (UID: \"edf2e828-f91c-4099-880f-04cced3cf065\") "
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.961502    3723 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v298c\" (UniqueName: \"kubernetes.io/projected/edf2e828-f91c-4099-880f-04cced3cf065-kube-api-access-v298c\") pod \"edf2e828-f91c-4099-880f-04cced3cf065\" (UID: \"edf2e828-f91c-4099-880f-04cced3cf065\") "
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: W1020 00:11:36.961750    3723 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/edf2e828-f91c-4099-880f-04cced3cf065/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.961920    3723 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edf2e828-f91c-4099-880f-04cced3cf065-config-volume" (OuterVolumeSpecName: "config-volume") pod "edf2e828-f91c-4099-880f-04cced3cf065" (UID: "edf2e828-f91c-4099-880f-04cced3cf065"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 20 00:11:36 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:36.988198    3723 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf2e828-f91c-4099-880f-04cced3cf065-kube-api-access-v298c" (OuterVolumeSpecName: "kube-api-access-v298c") pod "edf2e828-f91c-4099-880f-04cced3cf065" (UID: "edf2e828-f91c-4099-880f-04cced3cf065"). InnerVolumeSpecName "kube-api-access-v298c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.062358    3723 reconciler.go:319] "Volume detached for volume \"kube-api-access-v298c\" (UniqueName: \"kubernetes.io/projected/edf2e828-f91c-4099-880f-04cced3cf065-kube-api-access-v298c\") on node \"embed-certs-20211020000119-6808\" DevicePath \"\""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.062400    3723 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edf2e828-f91c-4099-880f-04cced3cf065-config-volume\") on node \"embed-certs-20211020000119-6808\" DevicePath \"\""
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.565728    3723 scope.go:110] "RemoveContainer" containerID="5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.573311    3723 scope.go:110] "RemoveContainer" containerID="5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:37.573754    3723 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": not found" containerID="5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7"
	Oct 20 00:11:37 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:37.573813    3723 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7} err="failed to get container status \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b267fc588b14ba5760d1fa9658c03ce34eaae59a685d96ef491b6c4ba421bb7\": not found"
	Oct 20 00:11:38 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:38.340293    3723 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=edf2e828-f91c-4099-880f-04cced3cf065 path="/var/lib/kubelet/pods/edf2e828-f91c-4099-880f-04cced3cf065/volumes"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.387977    3723 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.388032    3723 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.388194    3723 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z6hkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-4qx7n_kube-system(2d5a6732-f488-4534-a9dc-6d0e07139c47): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Oct 20 00:11:42 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:42.388255    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-4qx7n" podUID=2d5a6732-f488-4534-a9dc-6d0e07139c47
	Oct 20 00:11:48 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:48.335327    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:49.592435    3723 scope.go:110] "RemoveContainer" containerID="4730553631cb302673c4a3870c1e964ff5341db998aad13dc0ebbe9e16b8c2f5"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 kubelet[3723]: I1020 00:11:49.592729    3723 scope.go:110] "RemoveContainer" containerID="6bbb4c44400b641cc5dd9085550804112419b62d39d659e692e54febb76d61f5"
	Oct 20 00:11:49 embed-certs-20211020000119-6808 kubelet[3723]: E1020 00:11:49.593093    3723 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-lhdc7_kubernetes-dashboard(68ecd0f7-06c3-4470-a39c-90bbfdd9bc32)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-lhdc7" podUID=68ecd0f7-06c3-4470-a39c-90bbfdd9bc32
	
	* 
	* ==> kubernetes-dashboard [de8122a69ebdc8c738ab77d58119c3ff12b556235bc9a22f62e47a9682432b06] <==
	* 2021/10/20 00:11:28 Using namespace: kubernetes-dashboard
	2021/10/20 00:11:28 Using in-cluster config to connect to apiserver
	2021/10/20 00:11:28 Using secret token for csrf signing
	2021/10/20 00:11:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/10/20 00:11:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/10/20 00:11:28 Successful initial request to the apiserver, version: v1.22.2
	2021/10/20 00:11:28 Generating JWE encryption key
	2021/10/20 00:11:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/10/20 00:11:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/10/20 00:11:28 Initializing JWE encryption key from synchronized object
	2021/10/20 00:11:28 Creating in-cluster Sidecar client
	2021/10/20 00:11:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/10/20 00:11:28 Serving insecurely on HTTP port: 9090
	2021/10/20 00:11:28 Starting overwatch
	
	* 
	* ==> storage-provisioner [509c6eb7c34d77e290f5262c497542e1c2021b9517a4dd32c6b81c77b5096275] <==
	* I1020 00:11:27.242993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 00:11:27.250547       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 00:11:27.250584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 00:11:27.317471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 00:11:27.317644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20211020000119-6808_d353d518-169b-4346-b91f-e15b4e84e8bc!
	I1020 00:11:27.317702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6721c57-b983-4598-9c9b-2172ad9bb06d", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20211020000119-6808_d353d518-169b-4346-b91f-e15b4e84e8bc became leader
	I1020 00:11:27.420177       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20211020000119-6808_d353d518-169b-4346-b91f-e15b4e84e8bc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-7c784ccb57-4qx7n
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 describe pod metrics-server-7c784ccb57-4qx7n
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20211020000119-6808 describe pod metrics-server-7c784ccb57-4qx7n: exit status 1 (83.516533ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-4qx7n" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20211020000119-6808 describe pod metrics-server-7c784ccb57-4qx7n: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.24s)


Test pass (240/269)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 9.91
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.22.2/json-events 9.13
11 TestDownloadOnly/v1.22.2/preload-exists 0
15 TestDownloadOnly/v1.22.2/LogsDuration 0.06
17 TestDownloadOnly/v1.22.3-rc.0/json-events 9.02
18 TestDownloadOnly/v1.22.3-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.3-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.36
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
25 TestDownloadOnlyKic 12.04
26 TestOffline 97.09
28 TestAddons/Setup 132.21
30 TestAddons/parallel/Registry 21.78
31 TestAddons/parallel/Ingress 47.98
32 TestAddons/parallel/MetricsServer 6.69
33 TestAddons/parallel/HelmTiller 12.22
34 TestAddons/parallel/Olm 43.47
35 TestAddons/parallel/CSI 60.48
38 TestAddons/StoppedEnableDisable 20.73
39 TestCertOptions 61.33
40 TestCertExpiration 281.17
42 TestForceSystemdFlag 52.74
43 TestForceSystemdEnv 71.11
44 TestKVMDriverInstallOrUpdate 2.75
48 TestErrorSpam/setup 42.95
49 TestErrorSpam/start 0.94
50 TestErrorSpam/status 1.11
51 TestErrorSpam/pause 2.49
52 TestErrorSpam/unpause 1.44
53 TestErrorSpam/stop 23.42
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 70.48
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 15.48
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.21
64 TestFunctional/serial/CacheCmd/cache/add_remote 2.37
65 TestFunctional/serial/CacheCmd/cache/add_local 1.38
66 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
67 TestFunctional/serial/CacheCmd/cache/list 0.05
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
70 TestFunctional/serial/CacheCmd/cache/delete 0.11
71 TestFunctional/serial/MinikubeKubectlCmd 0.14
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
73 TestFunctional/serial/ExtraConfig 42.2
74 TestFunctional/serial/ComponentHealth 0.07
75 TestFunctional/serial/LogsCmd 1.03
78 TestFunctional/parallel/ConfigCmd 0.43
79 TestFunctional/parallel/DashboardCmd 2.59
80 TestFunctional/parallel/DryRun 0.81
81 TestFunctional/parallel/InternationalLanguage 0.24
82 TestFunctional/parallel/StatusCmd 1.64
85 TestFunctional/parallel/ServiceCmd 11.7
86 TestFunctional/parallel/AddonsCmd 0.15
87 TestFunctional/parallel/PersistentVolumeClaim 27.02
89 TestFunctional/parallel/SSHCmd 0.68
90 TestFunctional/parallel/CpCmd 0.79
91 TestFunctional/parallel/MySQL 20.06
92 TestFunctional/parallel/FileSync 0.39
93 TestFunctional/parallel/CertSync 2.35
97 TestFunctional/parallel/NodeLabels 0.08
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
102 TestFunctional/parallel/MountCmd/any-port 14.94
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 1.06
105 TestFunctional/parallel/ProfileCmd/profile_list 0.46
106 TestFunctional/parallel/ImageCommands/ImageList 0.25
107 TestFunctional/parallel/ImageCommands/ImageBuild 2.82
108 TestFunctional/parallel/ImageCommands/Setup 0.82
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
113 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.7
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.34
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.94
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.54
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.22
122 TestFunctional/parallel/MountCmd/specific-port 2.08
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/delete_addon-resizer_images 0.1
130 TestFunctional/delete_my-image_image 0.04
131 TestFunctional/delete_minikube_cached_images 0.04
135 TestJSONOutput/start/Command 70.52
136 TestJSONOutput/start/Audit 0
138 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/pause/Command 0.62
142 TestJSONOutput/pause/Audit 0
144 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/unpause/Command 0.59
148 TestJSONOutput/unpause/Audit 0
150 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/stop/Command 23.94
154 TestJSONOutput/stop/Audit 0
156 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
158 TestErrorJSONOutput 0.33
160 TestKicCustomNetwork/create_custom_network 29.66
161 TestKicCustomNetwork/use_default_bridge_network 23.8
162 TestKicExistingNetwork 24.26
163 TestMainNoArgs 0.05
166 TestMountStart/serial/StartWithMountFirst 43.7
167 TestMountStart/serial/StartWithMountSecond 43.92
168 TestMountStart/serial/VerifyMountFirst 0.33
169 TestMountStart/serial/VerifyMountSecond 0.33
170 TestMountStart/serial/DeleteFirst 2.85
171 TestMountStart/serial/VerifyMountPostDelete 0.35
172 TestMountStart/serial/Stop 20.51
173 TestMountStart/serial/RestartStopped 34.15
174 TestMountStart/serial/VerifyMountPostStop 0.32
177 TestMultiNode/serial/FreshStart2Nodes 110.2
178 TestMultiNode/serial/DeployApp2Nodes 4.87
179 TestMultiNode/serial/PingHostFrom2Pods 0.79
180 TestMultiNode/serial/AddNode 41.97
181 TestMultiNode/serial/ProfileList 0.36
182 TestMultiNode/serial/CopyFile 2.82
183 TestMultiNode/serial/StopNode 21.81
184 TestMultiNode/serial/StartAfterStop 35.79
185 TestMultiNode/serial/RestartKeepsNodes 188.77
186 TestMultiNode/serial/DeleteNode 25.03
187 TestMultiNode/serial/StopMultiNode 41.51
188 TestMultiNode/serial/RestartMultiNode 109.43
189 TestMultiNode/serial/ValidateNameConflict 46.43
193 TestPreload 129.9
198 TestInsufficientStorage 13.37
199 TestRunningBinaryUpgrade 103.06
201 TestKubernetesUpgrade 194.63
202 TestMissingContainerUpgrade 168.37
204 TestPause/serial/Start 78.74
205 TestStoppedBinaryUpgrade/Upgrade 120.47
213 TestNetworkPlugins/group/false 0.77
217 TestPause/serial/SecondStartNoReconfiguration 15.96
218 TestPause/serial/Pause 0.86
219 TestPause/serial/VerifyStatus 0.49
220 TestPause/serial/Unpause 0.98
222 TestPause/serial/DeletePaused 3.25
223 TestPause/serial/VerifyDeletedResources 1.32
224 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
232 TestNetworkPlugins/group/auto/Start 62.02
233 TestNetworkPlugins/group/custom-weave/Start 82.96
234 TestNetworkPlugins/group/cilium/Start 84.04
235 TestNetworkPlugins/group/auto/KubeletFlags 0.35
236 TestNetworkPlugins/group/auto/NetCatPod 9.39
237 TestNetworkPlugins/group/auto/DNS 0.16
238 TestNetworkPlugins/group/auto/Localhost 0.14
239 TestNetworkPlugins/group/auto/HairPin 0.14
240 TestNetworkPlugins/group/calico/Start 89.32
241 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.43
242 TestNetworkPlugins/group/custom-weave/NetCatPod 11.45
243 TestNetworkPlugins/group/enable-default-cni/Start 64.58
244 TestNetworkPlugins/group/cilium/ControllerPod 5.02
245 TestNetworkPlugins/group/cilium/KubeletFlags 0.43
246 TestNetworkPlugins/group/cilium/NetCatPod 13.23
247 TestNetworkPlugins/group/kindnet/Start 78.93
248 TestNetworkPlugins/group/cilium/DNS 0.18
249 TestNetworkPlugins/group/cilium/Localhost 0.14
250 TestNetworkPlugins/group/cilium/HairPin 0.14
251 TestNetworkPlugins/group/bridge/Start 64.24
252 TestNetworkPlugins/group/calico/ControllerPod 5.02
253 TestNetworkPlugins/group/calico/KubeletFlags 0.35
254 TestNetworkPlugins/group/calico/NetCatPod 20.3
255 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
256 TestNetworkPlugins/group/calico/DNS 0.2
257 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
258 TestNetworkPlugins/group/calico/Localhost 0.17
259 TestNetworkPlugins/group/calico/HairPin 0.16
261 TestStartStop/group/old-k8s-version/serial/FirstStart 111.13
262 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
263 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
264 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
266 TestStartStop/group/no-preload/serial/FirstStart 91.89
267 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
268 TestNetworkPlugins/group/kindnet/KubeletFlags 0.95
269 TestNetworkPlugins/group/bridge/KubeletFlags 0.69
270 TestNetworkPlugins/group/kindnet/NetCatPod 20.7
271 TestNetworkPlugins/group/bridge/NetCatPod 21.57
272 TestNetworkPlugins/group/kindnet/DNS 0.16
273 TestNetworkPlugins/group/kindnet/Localhost 0.13
274 TestNetworkPlugins/group/kindnet/HairPin 0.14
275 TestNetworkPlugins/group/bridge/DNS 0.21
276 TestNetworkPlugins/group/bridge/Localhost 0.15
277 TestNetworkPlugins/group/bridge/HairPin 0.14
279 TestStartStop/group/embed-certs/serial/FirstStart 258.81
281 TestStartStop/group/newest-cni/serial/FirstStart 58.35
282 TestStartStop/group/no-preload/serial/DeployApp 9.37
283 TestStartStop/group/old-k8s-version/serial/DeployApp 9.36
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.7
285 TestStartStop/group/no-preload/serial/Stop 20.72
286 TestStartStop/group/newest-cni/serial/DeployApp 0
287 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.55
288 TestStartStop/group/newest-cni/serial/Stop 20.74
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.57
290 TestStartStop/group/old-k8s-version/serial/Stop 20.84
291 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
292 TestStartStop/group/no-preload/serial/SecondStart 327.95
293 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
294 TestStartStop/group/newest-cni/serial/SecondStart 36.33
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
296 TestStartStop/group/old-k8s-version/serial/SecondStart 455.96
297 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
298 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
300 TestStartStop/group/newest-cni/serial/Pause 2.75
302 TestStartStop/group/default-k8s-different-port/serial/FirstStart 61.49
303 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.45
304 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.67
305 TestStartStop/group/default-k8s-different-port/serial/Stop 20.79
306 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/default-k8s-different-port/serial/SecondStart 328.1
308 TestStartStop/group/embed-certs/serial/DeployApp 9.31
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.66
310 TestStartStop/group/embed-certs/serial/Stop 20.62
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/embed-certs/serial/SecondStart 330.61
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
316 TestStartStop/group/no-preload/serial/Pause 2.96
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
318 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.22
320 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.08
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
322 TestStartStop/group/old-k8s-version/serial/Pause 2.95
323 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.38
324 TestStartStop/group/default-k8s-different-port/serial/Pause 3.21
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.19
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
TestDownloadOnly/v1.14.0/json-events (9.91s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211019231602-6808 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211019231602-6808 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.90630914s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (9.91s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211019231602-6808
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211019231602-6808: exit status 85 (60.321835ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/19 23:16:02
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 23:16:02.576605    6820 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:16:02.576781    6820 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:02.576789    6820 out.go:310] Setting ErrFile to fd 2...
	I1019 23:16:02.576793    6820 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:02.576869    6820 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	W1019 23:16:02.576960    6820 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/config/config.json: no such file or directory
	I1019 23:16:02.577154    6820 out.go:304] Setting JSON to true
	I1019 23:16:02.612904    6820 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":3525,"bootTime":1634681838,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:16:02.612978    6820 start.go:121] virtualization: kvm guest
	I1019 23:16:02.615605    6820 notify.go:169] Checking for updates...
	I1019 23:16:02.617450    6820 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:16:02.662359    6820 docker.go:132] docker version: linux-19.03.15
	I1019 23:16:02.662436    6820 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:02.953434    6820 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:02.694665224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:02.953531    6820 docker.go:237] overlay module found
	I1019 23:16:02.955297    6820 start.go:278] selected driver: docker
	I1019 23:16:02.955307    6820 start.go:751] validating driver "docker" against <nil>
	I1019 23:16:02.955738    6820 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:03.030874    6820 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:02.987999921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:03.030958    6820 start_flags.go:266] no existing cluster config was found, will generate one from the flags 
	I1019 23:16:03.031388    6820 start_flags.go:347] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I1019 23:16:03.031471    6820 start_flags.go:730] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 23:16:03.031490    6820 cni.go:93] Creating CNI manager for ""
	I1019 23:16:03.031500    6820 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:16:03.031514    6820 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1019 23:16:03.031522    6820 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1019 23:16:03.031526    6820 start_flags.go:275] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 23:16:03.031578    6820 start_flags.go:280] config:
	{Name:download-only-20211019231602-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211019231602-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:03.033422    6820 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1019 23:16:03.034817    6820 preload.go:130] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I1019 23:16:03.034851    6820 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1019 23:16:03.069920    6820 preload.go:117] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:03.069940    6820 cache.go:57] Caching tarball of preloaded images
	I1019 23:16:03.070170    6820 preload.go:130] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I1019 23:16:03.072046    6820 preload.go:236] getting checksum for preloaded-images-k8s-v13-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I1019 23:16:03.095193    6820 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:3eb9ea3676a16b2a9b86f4cd699acbc7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:03.128428    6820 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1019 23:16:03.128446    6820 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211019231602-6808"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

TestDownloadOnly/v1.22.2/json-events (9.13s)

=== RUN   TestDownloadOnly/v1.22.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211019231602-6808 --force --alsologtostderr --kubernetes-version=v1.22.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211019231602-6808 --force --alsologtostderr --kubernetes-version=v1.22.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.132581958s)
--- PASS: TestDownloadOnly/v1.22.2/json-events (9.13s)

TestDownloadOnly/v1.22.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.2/preload-exists
--- PASS: TestDownloadOnly/v1.22.2/preload-exists (0.00s)

TestDownloadOnly/v1.22.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.22.2/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211019231602-6808
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211019231602-6808: exit status 85 (61.651418ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/19 23:16:12
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 23:16:12.546020    6962 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:16:12.546189    6962 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:12.546197    6962 out.go:310] Setting ErrFile to fd 2...
	I1019 23:16:12.546201    6962 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:12.546293    6962 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	W1019 23:16:12.546382    6962 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/config/config.json: no such file or directory
	I1019 23:16:12.546475    6962 out.go:304] Setting JSON to true
	I1019 23:16:12.580196    6962 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":3535,"bootTime":1634681838,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:16:12.580281    6962 start.go:121] virtualization: kvm guest
	I1019 23:16:12.582526    6962 notify.go:169] Checking for updates...
	I1019 23:16:12.584393    6962 config.go:176] Loaded profile config "download-only-20211019231602-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W1019 23:16:12.584434    6962 start.go:659] api.Load failed for download-only-20211019231602-6808: filestore "download-only-20211019231602-6808": Docker machine "download-only-20211019231602-6808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1019 23:16:12.584467    6962 driver.go:343] Setting default libvirt URI to qemu:///system
	W1019 23:16:12.584493    6962 start.go:659] api.Load failed for download-only-20211019231602-6808: filestore "download-only-20211019231602-6808": Docker machine "download-only-20211019231602-6808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1019 23:16:12.628596    6962 docker.go:132] docker version: linux-19.03.15
	I1019 23:16:12.628673    6962 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:12.709412    6962 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:12.660622441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:12.709510    6962 docker.go:237] overlay module found
	I1019 23:16:12.711216    6962 start.go:278] selected driver: docker
	I1019 23:16:12.711228    6962 start.go:751] validating driver "docker" against &{Name:download-only-20211019231602-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211019231602-6808 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:12.711794    6962 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:12.784308    6962 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:12.743233178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:12.784792    6962 cni.go:93] Creating CNI manager for ""
	I1019 23:16:12.784809    6962 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:16:12.784824    6962 start_flags.go:280] config:
	{Name:download-only-20211019231602-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:download-only-20211019231602-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:12.786601    6962 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1019 23:16:12.787895    6962 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:16:12.787929    6962 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1019 23:16:12.808212    6962 preload.go:117] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:12.808235    6962 cache.go:57] Caching tarball of preloaded images
	I1019 23:16:12.808460    6962 preload.go:130] Checking if preload exists for k8s version v1.22.2 and runtime containerd
	I1019 23:16:12.810176    6962 preload.go:236] getting checksum for preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4 ...
	I1019 23:16:12.836495    6962 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:3dbe6de722ec0eddffd1862f06af8a2f -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.2-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:12.880665    6962 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1019 23:16:12.880695    6962 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211019231602-6808"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.2/LogsDuration (0.06s)

TestDownloadOnly/v1.22.3-rc.0/json-events (9.02s)

=== RUN   TestDownloadOnly/v1.22.3-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211019231602-6808 --force --alsologtostderr --kubernetes-version=v1.22.3-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211019231602-6808 --force --alsologtostderr --kubernetes-version=v1.22.3-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.024237335s)
--- PASS: TestDownloadOnly/v1.22.3-rc.0/json-events (9.02s)

TestDownloadOnly/v1.22.3-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.3-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.3-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.3-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.22.3-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211019231602-6808
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211019231602-6808: exit status 85 (60.320281ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/10/19 23:16:21
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 23:16:21.738139    7103 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:16:21.738206    7103 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:21.738210    7103 out.go:310] Setting ErrFile to fd 2...
	I1019 23:16:21.738213    7103 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:16:21.738303    7103 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	W1019 23:16:21.738403    7103 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/config/config.json: no such file or directory
	I1019 23:16:21.738489    7103 out.go:304] Setting JSON to true
	I1019 23:16:21.772209    7103 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":3544,"bootTime":1634681838,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:16:21.772301    7103 start.go:121] virtualization: kvm guest
	I1019 23:16:21.774431    7103 notify.go:169] Checking for updates...
	I1019 23:16:21.776863    7103 config.go:176] Loaded profile config "download-only-20211019231602-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	W1019 23:16:21.776932    7103 start.go:659] api.Load failed for download-only-20211019231602-6808: filestore "download-only-20211019231602-6808": Docker machine "download-only-20211019231602-6808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1019 23:16:21.776979    7103 driver.go:343] Setting default libvirt URI to qemu:///system
	W1019 23:16:21.777226    7103 start.go:659] api.Load failed for download-only-20211019231602-6808: filestore "download-only-20211019231602-6808": Docker machine "download-only-20211019231602-6808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1019 23:16:21.823942    7103 docker.go:132] docker version: linux-19.03.15
	I1019 23:16:21.824017    7103 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:21.900189    7103 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:21.85761538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:21.900271    7103 docker.go:237] overlay module found
	I1019 23:16:21.901904    7103 start.go:278] selected driver: docker
	I1019 23:16:21.901922    7103 start.go:751] validating driver "docker" against &{Name:download-only-20211019231602-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:download-only-20211019231602-6808 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:21.902392    7103 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:16:21.977022    7103 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-10-19 23:16:21.933857129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:16:21.977528    7103 cni.go:93] Creating CNI manager for ""
	I1019 23:16:21.977544    7103 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1019 23:16:21.977553    7103 start_flags.go:280] config:
	{Name:download-only-20211019231602-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3-rc.0 ClusterName:download-only-20211019231602-6808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:16:21.979399    7103 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1019 23:16:21.980839    7103 preload.go:130] Checking if preload exists for k8s version v1.22.3-rc.0 and runtime containerd
	I1019 23:16:21.980943    7103 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1019 23:16:22.001744    7103 preload.go:117] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-rc.0-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:22.001777    7103 cache.go:57] Caching tarball of preloaded images
	I1019 23:16:22.002067    7103 preload.go:130] Checking if preload exists for k8s version v1.22.3-rc.0 and runtime containerd
	I1019 23:16:22.003753    7103 preload.go:236] getting checksum for preloaded-images-k8s-v13-v1.22.3-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I1019 23:16:22.027392    7103 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:a10b30b9496e096242d80bb751d36623 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-rc.0-containerd-overlay2-amd64.tar.lz4
	I1019 23:16:22.073571    7103 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1019 23:16:22.073606    7103 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211019231602-6808"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.3-rc.0/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20211019231602-6808
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnlyKic (12.04s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20211019231631-6808 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20211019231631-6808 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (7.312610827s)
helpers_test.go:175: Cleaning up "download-docker-20211019231631-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20211019231631-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p download-docker-20211019231631-6808: (2.770710248s)
--- PASS: TestDownloadOnlyKic (12.04s)

TestOffline (97.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20211019235243-6808 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20211019235243-6808 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m33.590767821s)
helpers_test.go:175: Cleaning up "offline-containerd-20211019235243-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20211019235243-6808

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20211019235243-6808: (3.501397039s)
--- PASS: TestOffline (97.09s)

TestAddons/Setup (132.21s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20211019231643-6808 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p addons-20211019231643-6808 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m6.973097128s)
addons_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons enable gcp-auth
addons_test.go:99: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons enable gcp-auth --force
addons_test.go:99: (dbg) Done: out/minikube-linux-amd64 -p addons-20211019231643-6808 addons enable gcp-auth --force: (4.861931342s)
--- PASS: TestAddons/Setup (132.21s)

TestAddons/parallel/Registry (21.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:293: registry stabilized in 17.341115ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-bsxpn" [a1f96410-ffd9-4aa0-b501-2936a051c7d4] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.621282298s

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-9ddbd" [e75fbf66-a879-4a83-b118-2479b2b42449] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.493318942s
addons_test.go:303: (dbg) Run:  kubectl --context addons-20211019231643-6808 delete po -l run=registry-test --now
addons_test.go:308: (dbg) Run:  kubectl --context addons-20211019231643-6808 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: (dbg) Done: kubectl --context addons-20211019231643-6808 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.83120201s)
addons_test.go:322: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 ip
2021/10/19 23:19:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.78s)

TestAddons/parallel/Ingress (47.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:188: (dbg) Run:  kubectl --context addons-20211019231643-6808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:195: (dbg) Run:  kubectl --context addons-20211019231643-6808 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:208: (dbg) Run:  kubectl --context addons-20211019231643-6808 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:213: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [4454ff36-7686-40f4-9ff9-4f9f29decafe] Pending
helpers_test.go:342: "nginx" [4454ff36-7686-40f4-9ff9-4f9f29decafe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [4454ff36-7686-40f4-9ff9-4f9f29decafe] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:213: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.060095185s
addons_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:249: (dbg) Run:  kubectl --context addons-20211019231643-6808 replace --force -f testdata/ingress-dns-example.yaml
addons_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 ip
addons_test.go:260: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:269: (dbg) Done: out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable ingress-dns --alsologtostderr -v=1: (1.178184847s)
addons_test.go:274: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:274: (dbg) Done: out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable ingress --alsologtostderr -v=1: (29.107803648s)
--- PASS: TestAddons/parallel/Ingress (47.98s)

TestAddons/parallel/MetricsServer (6.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:370: metrics-server stabilized in 13.828024ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-77c99ccb96-m99jq" [be096bdf-c8da-4b3d-81ea-648956a0dfa9] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.623363022s
addons_test.go:378: (dbg) Run:  kubectl --context addons-20211019231643-6808 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.69s)

TestAddons/parallel/HelmTiller (12.22s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:419: tiller-deploy stabilized in 12.200286ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-64b546c44b-qgh9p" [19d16b94-c86b-44ed-9b95-f9ecf3979406] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.624655862s
addons_test.go:436: (dbg) Run:  kubectl --context addons-20211019231643-6808 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:436: (dbg) Done: kubectl --context addons-20211019231643-6808 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.248048879s)
addons_test.go:453: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.22s)

TestAddons/parallel/Olm (43.47s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:465: (dbg) Run:  kubectl --context addons-20211019231643-6808 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s
addons_test.go:468: catalog-operator stabilized in 101.209783ms
addons_test.go:470: (dbg) Run:  kubectl --context addons-20211019231643-6808 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s
addons_test.go:473: olm-operator stabilized in 192.852611ms
addons_test.go:475: (dbg) Run:  kubectl --context addons-20211019231643-6808 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:478: packageserver stabilized in 307.074313ms
addons_test.go:480: (dbg) Run:  kubectl --context addons-20211019231643-6808 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:483: operatorhubio-catalog stabilized in 391.993092ms
addons_test.go:486: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/etcd.yaml
addons_test.go:493: (dbg) Run:  kubectl --context addons-20211019231643-6808 get csv -n my-etcd
addons_test.go:498: kubectl --context addons-20211019231643-6808 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:493: (dbg) Run:  kubectl --context addons-20211019231643-6808 get csv -n my-etcd
addons_test.go:498: kubectl --context addons-20211019231643-6808 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:493: (dbg) Run:  kubectl --context addons-20211019231643-6808 get csv -n my-etcd
addons_test.go:498: kubectl --context addons-20211019231643-6808 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:493: (dbg) Run:  kubectl --context addons-20211019231643-6808 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: kubectl --context addons-20211019231643-6808 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:493: (dbg) Run:  kubectl --context addons-20211019231643-6808 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:493: (dbg) Run:  kubectl --context addons-20211019231643-6808 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (43.47s)

TestAddons/parallel/CSI (60.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:524: csi-hostpath-driver pods stabilized in 17.2739ms
addons_test.go:527: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211019231643-6808 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:537: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [d8431d3e-f6eb-4e42-bb19-b3f6a9485731] Pending
helpers_test.go:342: "task-pv-pod" [d8431d3e-f6eb-4e42-bb19-b3f6a9485731] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [d8431d3e-f6eb-4e42-bb19-b3f6a9485731] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:542: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.00592214s
addons_test.go:547: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:552: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20211019231643-6808 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20211019231643-6808 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:557: (dbg) Run:  kubectl --context addons-20211019231643-6808 delete pod task-pv-pod
addons_test.go:557: (dbg) Done: kubectl --context addons-20211019231643-6808 delete pod task-pv-pod: (1.091051721s)
addons_test.go:563: (dbg) Run:  kubectl --context addons-20211019231643-6808 delete pvc hpvc
addons_test.go:569: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211019231643-6808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:579: (dbg) Run:  kubectl --context addons-20211019231643-6808 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:584: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [fb5da112-219a-400e-94a7-3078ccb20a10] Pending
helpers_test.go:342: "task-pv-pod-restore" [fb5da112-219a-400e-94a7-3078ccb20a10] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [fb5da112-219a-400e-94a7-3078ccb20a10] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:584: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 24.006201767s
addons_test.go:589: (dbg) Run:  kubectl --context addons-20211019231643-6808 delete pod task-pv-pod-restore
addons_test.go:593: (dbg) Run:  kubectl --context addons-20211019231643-6808 delete pvc hpvc-restore
addons_test.go:597: (dbg) Run:  kubectl --context addons-20211019231643-6808 delete volumesnapshot new-snapshot-demo
addons_test.go:601: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:601: (dbg) Done: out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.879427528s)
addons_test.go:605: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211019231643-6808 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.48s)

TestAddons/StoppedEnableDisable (20.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20211019231643-6808
addons_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20211019231643-6808: (20.53744444s)
addons_test.go:162: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20211019231643-6808
addons_test.go:166: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20211019231643-6808
--- PASS: TestAddons/StoppedEnableDisable (20.73s)

TestCertOptions (61.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20211019235634-6808 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20211019235634-6808 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (55.608187307s)
cert_options_test.go:59: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20211019235634-6808 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:59: (dbg) Done: out/minikube-linux-amd64 -p cert-options-20211019235634-6808 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (2.329761317s)
cert_options_test.go:74: (dbg) Run:  kubectl --context cert-options-20211019235634-6808 config view
helpers_test.go:175: Cleaning up "cert-options-20211019235634-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20211019235634-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20211019235634-6808: (3.346423744s)
--- PASS: TestCertOptions (61.33s)

TestCertExpiration (281.17s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20211019235448-6808 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20211019235448-6808 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.97566647s)

=== CONT  TestCertExpiration
cert_options_test.go:104: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20211019235448-6808 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1019 23:58:55.846302    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:104: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20211019235448-6808 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (57.970328687s)
helpers_test.go:175: Cleaning up "cert-expiration-20211019235448-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20211019235448-6808

=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20211019235448-6808: (3.220376298s)
--- PASS: TestCertExpiration (281.17s)

TestForceSystemdFlag (52.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20211019235541-6808 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20211019235541-6808 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.076233485s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20211019235541-6808 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20211019235541-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20211019235541-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20211019235541-6808: (4.232127035s)
--- PASS: TestForceSystemdFlag (52.74s)

TestForceSystemdEnv (71.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20211019235243-6808 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20211019235243-6808 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m7.681883156s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20211019235243-6808 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20211019235243-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20211019235243-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20211019235243-6808: (3.076789215s)
--- PASS: TestForceSystemdEnv (71.11s)

TestKVMDriverInstallOrUpdate (2.75s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.75s)

TestErrorSpam/setup (42.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20211019232835-6808 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20211019232835-6808 --driver=docker  --container-runtime=containerd
E1019 23:28:55.845532    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:55.851138    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:55.861386    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:55.881613    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:55.921889    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:56.002169    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:56.162565    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:56.483103    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:57.123992    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:28:58.404423    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:29:00.966189    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:29:06.086712    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:29:16.327904    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20211019232835-6808 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20211019232835-6808 --driver=docker  --container-runtime=containerd: (42.950600515s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.2."
--- PASS: TestErrorSpam/setup (42.95s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (2.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 pause
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 pause: (1.554691381s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 pause
--- PASS: TestErrorSpam/pause (2.49s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (23.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 stop
E1019 23:29:36.808461    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 stop: (23.151675765s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211019232835-6808 --log_dir /tmp/nospam-20211019232835-6808 stop
--- PASS: TestErrorSpam/stop (23.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1633: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/files/etc/test/nested/copy/6808/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211019232952-6808 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1019 23:30:17.768887    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
functional_test.go:2015: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211019232952-6808 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m10.482947273s)
--- PASS: TestFunctional/serial/StartWithProxy (70.48s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.48s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:600: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211019232952-6808 --alsologtostderr -v=8
functional_test.go:600: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211019232952-6808 --alsologtostderr -v=8: (15.476486029s)
functional_test.go:604: soft start took 15.47706723s for "functional-20211019232952-6808" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.48s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:622: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:637: (dbg) Run:  kubectl --context functional-20211019232952-6808 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cache add k8s.gcr.io/pause:3.1
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cache add k8s.gcr.io/pause:3.3
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1014: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211019232952-6808 /tmp/functional-20211019232952-68081634773885
functional_test.go:1026: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cache add minikube-local-cache-test:functional-20211019232952-6808
functional_test.go:1026: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 cache add minikube-local-cache-test:functional-20211019232952-6808: (1.090026945s)
functional_test.go:1031: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cache delete minikube-local-cache-test:functional-20211019232952-6808
functional_test.go:1020: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211019232952-6808
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1061: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (341.257928ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cache reload
functional_test.go:1100: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
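The cache_reload sequence above can be sketched as a small script: remove the cached image from the node, confirm `crictl inspecti` now fails, restore it with `cache reload`, and confirm it is back. This is a sketch of the logged commands, not the test's code; it is a function so nothing runs until you invoke it against a live profile.

```shell
# Sketch of the round trip exercised by cache_reload. "$1" is a running
# minikube profile; k8s.gcr.io/pause:latest is the image the test uses.
cache_round_trip() {
  img="k8s.gcr.io/pause:latest"
  minikube -p "$1" ssh -- sudo crictl rmi "$img"
  # inspecti must fail while the image is absent from the node
  if minikube -p "$1" ssh -- sudo crictl inspecti "$img" >/dev/null 2>&1; then
    echo "expected $img to be absent" >&2
    return 1
  fi
  minikube -p "$1" cache reload     # re-push images from the host-side cache
  minikube -p "$1" ssh -- sudo crictl inspecti "$img" >/dev/null
}
```

Invoked as `cache_round_trip functional-20211019232952-6808`, this mirrors the four commands logged above, including the expected non-zero exit in the middle.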

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 kubectl -- --context functional-20211019232952-6808 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:682: (dbg) Run:  out/kubectl --context functional-20211019232952-6808 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)
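The two tests above exercise the same kubectl call in two forms: through the `minikube kubectl` wrapper (everything after `--` is forwarded to kubectl unchanged) and via the `out/kubectl` shim built next to the minikube binary. A hedged sketch, with the profile name as a placeholder and both calls guarded so the snippet is a no-op on machines without the binaries:

```shell
PROFILE=functional-20211019232952-6808   # placeholder; substitute your profile

if command -v minikube >/dev/null 2>&1; then
  # wrapper form: minikube picks a kubectl matching the cluster version
  # and passes everything after `--` through untouched
  minikube -p "$PROFILE" kubectl -- --context "$PROFILE" get pods
fi
if [ -x out/kubectl ]; then
  # direct form: invoke the bundled shim without the wrapper
  out/kubectl --context "$PROFILE" get pods
fi
```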

TestFunctional/serial/ExtraConfig (42.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:698: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211019232952-6808 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 23:31:39.689922    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
functional_test.go:698: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211019232952-6808 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.195115884s)
functional_test.go:702: restart took 42.195228036s for "functional-20211019232952-6808" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.20s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:752: (dbg) Run:  kubectl --context functional-20211019232952-6808 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:767: etcd phase: Running
functional_test.go:777: etcd status: Ready
functional_test.go:767: kube-apiserver phase: Running
functional_test.go:777: kube-apiserver status: Ready
functional_test.go:767: kube-controller-manager phase: Running
functional_test.go:777: kube-controller-manager status: Ready
functional_test.go:767: kube-scheduler phase: Running
functional_test.go:777: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
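ComponentHealth fetches the control-plane pods as JSON and checks each pod's phase and Ready status, producing the eight lines above. Roughly the same check can be done from the shell with a jsonpath template in place of the test's Go JSON parsing; this is a sketch, defined as a function so it only runs against a context you supply:

```shell
# Prints "<pod-name> <phase> <Ready-condition-status>" for each
# control-plane pod, one per line. "$1" is a kubeconfig context.
control_plane_health() {
  kubectl --context "$1" get po -l tier=control-plane -n kube-system \
    -o jsonpath='{range .items[*]}{.metadata.name} {.status.phase} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
}
```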

TestFunctional/serial/LogsCmd (1.03s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 logs
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 logs: (1.02853636s)
--- PASS: TestFunctional/serial/LogsCmd (1.03s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 config get cpus: exit status 14 (68.039554ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 config set cpus 2
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 config get cpus
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 config unset cpus
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 config get cpus: exit status 14 (63.274543ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
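In the ConfigCmd output above, `config get` on an unset key exits with status 14, which the test treats as the expected "not found" outcome rather than a failure. The full cycle, sketched as a function (the profile argument is a placeholder; nothing runs until you call it):

```shell
# set/get/unset cycle from ConfigCmd; minikube's `config get` exits 14
# when the key is absent from the profile's config.
config_cycle() {
  minikube -p "$1" config set cpus 2
  minikube -p "$1" config get cpus        # expected to print: 2
  minikube -p "$1" config unset cpus
  minikube -p "$1" config get cpus
  [ $? -eq 14 ] && echo "cpus is unset (exit 14, as the test expects)"
}
```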

TestFunctional/parallel/DashboardCmd (2.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:847: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20211019232952-6808 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:852: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20211019232952-6808 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 45568: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (2.59s)

TestFunctional/parallel/DryRun (0.81s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211019232952-6808 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:912: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20211019232952-6808 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (456.948254ms)
-- stdout --
	* [functional-20211019232952-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I1019 23:32:15.372173   42400 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:32:15.372236   42400 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:32:15.372240   42400 out.go:310] Setting ErrFile to fd 2...
	I1019 23:32:15.372244   42400 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:32:15.372343   42400 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:32:15.372596   42400 out.go:304] Setting JSON to false
	I1019 23:32:15.411127   42400 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4497,"bootTime":1634681838,"procs":221,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:32:15.411208   42400 start.go:121] virtualization: kvm guest
	I1019 23:32:15.604242   42400 out.go:176] * [functional-20211019232952-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1019 23:32:15.606019   42400 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:32:15.607445   42400 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 23:32:15.608891   42400 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1019 23:32:15.610385   42400 out.go:176]   - MINIKUBE_LOCATION=12739
	I1019 23:32:15.610956   42400 config.go:176] Loaded profile config "functional-20211019232952-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:32:15.611517   42400 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:32:15.661621   42400 docker.go:132] docker version: linux-19.03.15
	I1019 23:32:15.661721   42400 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:32:15.762619   42400 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2021-10-19 23:32:15.704980812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:32:15.762696   42400 docker.go:237] overlay module found
	I1019 23:32:15.766043   42400 out.go:176] * Using the docker driver based on existing profile
	I1019 23:32:15.766066   42400 start.go:278] selected driver: docker
	I1019 23:32:15.766072   42400 start.go:751] validating driver "docker" against &{Name:functional-20211019232952-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:functional-20211019232952-6808 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registr
y:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:32:15.766197   42400 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1019 23:32:15.766238   42400 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1019 23:32:15.766256   42400 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1019 23:32:15.767964   42400 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1019 23:32:15.770069   42400 out.go:176] 
	W1019 23:32:15.770287   42400 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 23:32:15.771973   42400 out.go:176] 
** /stderr **
functional_test.go:929: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211019232952-6808 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.81s)
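DryRun asserts that a 250MB request is rejected with exit status 23 (`RSRC_INSUFFICIENT_REQ_MEMORY`, since the request is below the 1800MB floor reported in the output above), while a dry run without `--memory` succeeds. A sketch of the failing half, as a function over a placeholder profile:

```shell
# Expect exit 23 when requested memory is under minikube's 1800MB minimum.
low_memory_rejected() {
  minikube start -p "$1" --dry-run --memory 250MB \
    --alsologtostderr --driver=docker --container-runtime=containerd
  [ $? -eq 23 ]   # 23 = RSRC_INSUFFICIENT_REQ_MEMORY
}
```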

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211019232952-6808 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:954: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20211019232952-6808 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (242.180543ms)
-- stdout --
	* [functional-20211019232952-6808] minikube v1.23.2 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	  - MINIKUBE_LOCATION=12739
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I1019 23:32:13.492095   41941 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:32:13.492204   41941 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:32:13.492213   41941 out.go:310] Setting ErrFile to fd 2...
	I1019 23:32:13.492216   41941 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:32:13.492338   41941 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:32:13.492517   41941 out.go:304] Setting JSON to false
	I1019 23:32:13.529766   41941 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4496,"bootTime":1634681838,"procs":216,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:32:13.529838   41941 start.go:121] virtualization: kvm guest
	I1019 23:32:13.532030   41941 out.go:176] * [functional-20211019232952-6808] minikube v1.23.2 sur Debian 9.13 (kvm/amd64)
	I1019 23:32:13.533526   41941 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:32:13.534945   41941 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 23:32:13.536346   41941 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1019 23:32:13.537699   41941 out.go:176]   - MINIKUBE_LOCATION=12739
	I1019 23:32:13.538069   41941 config.go:176] Loaded profile config "functional-20211019232952-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:32:13.538426   41941 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:32:13.587030   41941 docker.go:132] docker version: linux-19.03.15
	I1019 23:32:13.587112   41941 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:32:13.668963   41941 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:189 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2021-10-19 23:32:13.623363541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:32:13.669070   41941 docker.go:237] overlay module found
	I1019 23:32:13.670845   41941 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1019 23:32:13.670868   41941 start.go:278] selected driver: docker
	I1019 23:32:13.670874   41941 start.go:751] validating driver "docker" against &{Name:functional-20211019232952-6808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:functional-20211019232952-6808 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registr
y:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1019 23:32:13.670997   41941 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1019 23:32:13.671033   41941 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1019 23:32:13.671053   41941 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I1019 23:32:13.672440   41941 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1019 23:32:13.674297   41941 out.go:176] 
	W1019 23:32:13.674377   41941 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 23:32:13.675978   41941 out.go:176] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.64s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 status
functional_test.go:802: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:814: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.64s)

TestFunctional/parallel/ServiceCmd (11.7s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Run:  kubectl --context functional-20211019232952-6808 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1378: (dbg) Run:  kubectl --context functional-20211019232952-6808 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1383: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-6cbfcd7cbc-n52dx" [9a8d20bd-cb48-435f-9428-fe75e55ea0aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-6cbfcd7cbc-n52dx" [9a8d20bd-cb48-435f-9428-fe75e55ea0aa] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1383: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.006315954s
functional_test.go:1388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1388: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 service list: (1.376704503s)
functional_test.go:1401: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1410: found endpoint: https://192.168.49.2:31615
functional_test.go:1421: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 service hello-node --url --format={{.IP}}
functional_test.go:1430: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 service hello-node --url
functional_test.go:1436: found endpoint for hello-node: http://192.168.49.2:31615
functional_test.go:1447: Attempting to fetch http://192.168.49.2:31615 ...
functional_test.go:1467: http://192.168.49.2:31615: success! body:

Hostname: hello-node-6cbfcd7cbc-n52dx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31615
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (11.70s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 addons list
functional_test.go:1494: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (27.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [4de9e5f9-67b1-460f-a461-9b4aba1dcd35] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008389295s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20211019232952-6808 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20211019232952-6808 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211019232952-6808 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211019232952-6808 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [3787978e-425e-4d6f-bd8a-1c2590766854] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [3787978e-425e-4d6f-bd8a-1c2590766854] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [3787978e-425e-4d6f-bd8a-1c2590766854] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.007864012s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20211019232952-6808 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20211019232952-6808 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211019232952-6808 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b2459845-b23c-411a-becf-a26c4936b1ba] Pending
2021/10/19 23:32:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b2459845-b23c-411a-becf-a26c4936b1ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b2459845-b23c-411a-becf-a26c4936b1ba] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.006098381s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20211019232952-6808 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.02s)

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1534: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (0.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.79s)

TestFunctional/parallel/MySQL (20.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1571: (dbg) Run:  kubectl --context functional-20211019232952-6808 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1577: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-9bbbc5bbb-fct6x" [0c371058-d544-4592-a2fb-ac0a349f137f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-9bbbc5bbb-fct6x" [0c371058-d544-4592-a2fb-ac0a349f137f] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1577: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.055156278s
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211019232952-6808 exec mysql-9bbbc5bbb-fct6x -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211019232952-6808 exec mysql-9bbbc5bbb-fct6x -- mysql -ppassword -e "show databases;": exit status 1 (269.772357ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211019232952-6808 exec mysql-9bbbc5bbb-fct6x -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211019232952-6808 exec mysql-9bbbc5bbb-fct6x -- mysql -ppassword -e "show databases;": exit status 1 (230.210794ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211019232952-6808 exec mysql-9bbbc5bbb-fct6x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.06s)

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1707: Checking for existence of /etc/test/nested/copy/6808/hosts within VM
functional_test.go:1709: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /etc/test/nested/copy/6808/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1714: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/6808.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /etc/ssl/certs/6808.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /usr/share/ca-certificates/6808.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /usr/share/ca-certificates/6808.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1751: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1777: Checking for existence of /etc/ssl/certs/68082.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /etc/ssl/certs/68082.pem"
functional_test.go:1777: Checking for existence of /usr/share/ca-certificates/68082.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /usr/share/ca-certificates/68082.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1777: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1778: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.35s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:213: (dbg) Run:  kubectl --context functional-20211019232952-6808 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo systemctl is-active docker": exit status 1 (391.082926ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1805: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo systemctl is-active crio": exit status 1 (373.49463ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1218: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

TestFunctional/parallel/MountCmd/any-port (14.94s)

=== RUN   TestFunctional/parallel/MountCmd/any-port

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20211019232952-6808 /tmp/mounttest1994212275:/mount-9p --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:110: wrote "test-1634686329818580040" to /tmp/mounttest1994212275/created-by-test
functional_test_mount_test.go:110: wrote "test-1634686329818580040" to /tmp/mounttest1994212275/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1634686329818580040" to /tmp/mounttest1994212275/test-1634686329818580040
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (426.834508ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 23:32 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 23:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 23:32 test-1634686329818580040
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh cat /mount-9p/test-1634686329818580040

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20211019232952-6808 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [9d35ea22-cc84-4772-aa7e-8092a95182f8] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [9d35ea22-cc84-4772-aa7e-8092a95182f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [9d35ea22-cc84-4772-aa7e-8092a95182f8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.008885005s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20211019232952-6808 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211019232952-6808 /tmp/mounttest1994212275:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.94s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 version -o=json --components: (1.059593143s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1258: Took "392.137423ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1272: Took "63.566101ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ImageCommands/ImageList (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList

=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:246: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20211019232952-6808 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.22.2
k8s.gcr.io/kube-proxy:v1.22.2
k8s.gcr.io/kube-controller-manager:v1.22.2
k8s.gcr.io/kube-apiserver:v1.22.2
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20211019232952-6808
docker.io/library/busybox:1.28.4-glibc
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageList (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh pgrep buildkitd: exit status 1 (350.539792ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image build -t localhost/my-image:functional-20211019232952-6808 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 image build -t localhost/my-image:functional-20211019232952-6808 testdata/build: (2.237382375s)
functional_test.go:279: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20211019232952-6808 image build -t localhost/my-image:functional-20211019232952-6808 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 77B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 0.8s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57
#4 resolve docker.io/library/busybox@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 0.0s done
#4 extracting sha256:24fb2886d6f6c5d16481dd7608b47e78a8e92a13d6e64d87d57cb16d5f766d63 0.1s done
#4 DONE 0.2s

#5 [2/3] RUN true
#5 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:b78e9d51a68b09f5ae20c38914dd562ebd01969101af927533a94853b354a274 done
#8 exporting config sha256:52eb1d0a6a10eddf7937e24f368fdfcc75108296746ef82cf95963d6d97ef17d done
#8 naming to localhost/my-image:functional-20211019232952-6808 done
#8 DONE 0.1s
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

TestFunctional/parallel/ImageCommands/Setup (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:303: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.82s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1309: Took "447.922396ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1317: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1322: Took "66.096047ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1897: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211019232952-6808: (5.403375333s)
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.70s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image save gcr.io/google-containers/addon-resizer:functional-20211019232952-6808 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:321: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 image save gcr.io/google-containers/addon-resizer:functional-20211019232952-6808 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.341749849s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:333: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image rm gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.712900453s)
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.94s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:360: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-20211019232952-6808 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211019232952-6808: (1.457422964s)
functional_test.go:370: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20211019232952-6808 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20211019232952-6808 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [86a022dd-efdd-468a-b84c-3ecc3a141d5c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [86a022dd-efdd-468a-b84c-3ecc3a141d5c] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005805935s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20211019232952-6808 /tmp/mounttest3055887832:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.895882ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211019232952-6808 /tmp/mounttest3055887832:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh "sudo umount -f /mount-9p": exit status 1 (340.316412ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20211019232952-6808 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211019232952-6808 /tmp/mounttest3055887832:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20211019232952-6808 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.98.7.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20211019232952-6808 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211019232952-6808
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:192: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211019232952-6808
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:200: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211019232952-6808
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Command (70.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20211019233246-6808 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1019 23:33:55.845870    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20211019233246-6808 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m10.523598858s)
--- PASS: TestJSONOutput/start/Command (70.52s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20211019233246-6808 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20211019233246-6808 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (23.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20211019233246-6808 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20211019233246-6808 --output=json --user=testUser: (23.937351108s)
--- PASS: TestJSONOutput/stop/Command (23.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20211019233428-6808 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20211019233428-6808 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.448181ms)
-- stdout --
	{"specversion":"1.0","id":"13b3df9a-cf3a-4013-811b-4f0e5ec57f9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211019233428-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e34be40d-2807-48be-b979-0d8dac33f75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig"}}
	{"specversion":"1.0","id":"14dda892-a22a-45d6-8382-864c6a9d5236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5143e55d-fb86-46db-9dac-f4abecf0f645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube"}}
	{"specversion":"1.0","id":"601b1b84-ee39-41b5-be53-97664cd4c9ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"9d247b16-3133-475e-8c2b-4a0fa0f311cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20211019233428-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20211019233428-6808
--- PASS: TestErrorJSONOutput (0.33s)
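The `-- stdout --` block above is minikube's machine-readable output: with `--output=json`, each line is a CloudEvents-style JSON object whose `type` names the event kind (`io.k8s.sigs.minikube.step`, `.info`, `.error`) and whose `data` carries the payload. A minimal sketch of consuming that stream, assuming only the line format shown above (the helper names are illustrative, not part of minikube):

```python
import json

def parse_events(stdout: str):
    """Parse one CloudEvents-style JSON object per non-empty line."""
    events = []
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue
        ev = json.loads(line)
        events.append((ev["type"], ev["data"]))
    return events

def first_error(events):
    # minikube encodes the exit code as a string inside the error event's
    # data, e.g. "exitcode":"56" in the DRV_UNSUPPORTED_OS event above.
    for kind, data in events:
        if kind == "io.k8s.sigs.minikube.error":
            return int(data["exitcode"]), data["message"]
    return None
```

This is how the exit status 56 reported by the test maps onto the last event in the stream: the process exit code and the error event's `exitcode` field agree.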

TestKicCustomNetwork/create_custom_network (29.66s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20211019233428-6808 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20211019233428-6808 --network=: (27.112944744s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211019233428-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20211019233428-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20211019233428-6808: (2.507100434s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.66s)

TestKicCustomNetwork/use_default_bridge_network (23.8s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20211019233458-6808 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20211019233458-6808 --network=bridge: (21.42675647s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211019233458-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20211019233458-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20211019233458-6808: (2.330315831s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.80s)

TestKicExistingNetwork (24.26s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20211019233522-6808 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20211019233522-6808 --network=existing-network: (21.536204052s)
helpers_test.go:175: Cleaning up "existing-network-20211019233522-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20211019233522-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20211019233522-6808: (2.477902111s)
--- PASS: TestKicExistingNetwork (24.26s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (43.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20211019233546-6808 --memory=2048 --mount --driver=docker  --container-runtime=containerd
mount_start_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20211019233546-6808 --memory=2048 --mount --driver=docker  --container-runtime=containerd: (43.700661326s)
--- PASS: TestMountStart/serial/StartWithMountFirst (43.70s)

TestMountStart/serial/StartWithMountSecond (43.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20211019233546-6808 --memory=2048 --mount --driver=docker  --container-runtime=containerd
E1019 23:37:11.993543    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:11.998844    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:12.009159    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:12.029436    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:12.069808    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:12.150209    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:12.311013    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:12.631924    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:13.273005    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
mount_start_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20211019233546-6808 --memory=2048 --mount --driver=docker  --container-runtime=containerd: (43.915469838s)
--- PASS: TestMountStart/serial/StartWithMountSecond (43.92s)

TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20211019233546-6808 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211019233546-6808 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (2.85s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:130: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20211019233546-6808 --alsologtostderr -v=5
E1019 23:37:14.553989    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:17.114400    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
pause_test.go:130: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20211019233546-6808 --alsologtostderr -v=5: (2.848437211s)
--- PASS: TestMountStart/serial/DeleteFirst (2.85s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211019233546-6808 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (20.51s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20211019233546-6808
E1019 23:37:22.235540    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:37:32.475984    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20211019233546-6808: (20.506141071s)
--- PASS: TestMountStart/serial/Stop (20.51s)

TestMountStart/serial/RestartStopped (34.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20211019233546-6808
E1019 23:37:52.957154    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
mount_start_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20211019233546-6808: (34.154852742s)
--- PASS: TestMountStart/serial/RestartStopped (34.15s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211019233546-6808 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (110.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211019233815-6808 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1019 23:38:33.917838    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:38:55.845839    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1019 23:39:55.838471    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
multinode_test.go:82: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211019233815-6808 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m49.62535911s)
multinode_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.20s)

TestMultiNode/serial/DeployApp2Nodes (4.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:468: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- rollout status deployment/busybox: (2.987615726s)
multinode_test.go:474: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:494: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-fmbt4 -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-s5lf6 -- nslookup kubernetes.io
multinode_test.go:504: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-fmbt4 -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-s5lf6 -- nslookup kubernetes.default
multinode_test.go:512: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-fmbt4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-s5lf6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.87s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:530: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-fmbt4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-fmbt4 -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:530: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-s5lf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211019233815-6808 -- exec busybox-84b6686758-s5lf6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
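The PingHostFrom2Pods steps above recover the host IP from inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`: take the fifth line of the nslookup output, then its third single-space-separated field. A sketch of that field logic in Python; the busybox nslookup output layout used in the example is an assumption, not captured from this run:

```python
def host_ip_from_nslookup(output: str) -> str:
    """Equivalent of `awk 'NR==5' | cut -d' ' -f3`.

    NR==5 selects the 5th line (1-indexed); cut -f3 with a single-space
    delimiter means consecutive spaces produce empty fields, which
    str.split(" ") reproduces exactly (unlike str.split()).
    """
    lines = output.splitlines()
    fields = lines[4].split(" ")
    return fields[2]
```

The `split(" ")` / `split()` distinction matters here: `cut -d' '` treats every space as a delimiter, so the pipeline only works because busybox nslookup separates the fields on that line with single spaces.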

TestMultiNode/serial/AddNode (41.97s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20211019233815-6808 -v 3 --alsologtostderr
multinode_test.go:107: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20211019233815-6808 -v 3 --alsologtostderr: (41.207048813s)
multinode_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.97s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (2.82s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --output json --alsologtostderr
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 cp testdata/cp-test.txt multinode-20211019233815-6808-m02:/home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 ssh -n multinode-20211019233815-6808-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 cp testdata/cp-test.txt multinode-20211019233815-6808-m03:/home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 ssh -n multinode-20211019233815-6808-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.82s)

TestMultiNode/serial/StopNode (21.81s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 node stop m03
multinode_test.go:192: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211019233815-6808 node stop m03: (20.552050787s)
multinode_test.go:198: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211019233815-6808 status: exit status 7 (613.155223ms)

-- stdout --
	multinode-20211019233815-6808
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211019233815-6808-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211019233815-6808-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:205: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr: exit status 7 (647.099996ms)

-- stdout --
	multinode-20211019233815-6808
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211019233815-6808-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211019233815-6808-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 23:41:17.770341   84441 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:41:17.770539   84441 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:41:17.770548   84441 out.go:310] Setting ErrFile to fd 2...
	I1019 23:41:17.770552   84441 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:41:17.770645   84441 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:41:17.770792   84441 out.go:304] Setting JSON to false
	I1019 23:41:17.770812   84441 mustload.go:65] Loading cluster: multinode-20211019233815-6808
	I1019 23:41:17.771066   84441 config.go:176] Loaded profile config "multinode-20211019233815-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:41:17.771079   84441 status.go:253] checking status of multinode-20211019233815-6808 ...
	I1019 23:41:17.771416   84441 cli_runner.go:115] Run: docker container inspect multinode-20211019233815-6808 --format={{.State.Status}}
	I1019 23:41:17.811630   84441 status.go:328] multinode-20211019233815-6808 host status = "Running" (err=<nil>)
	I1019 23:41:17.811658   84441 host.go:66] Checking if "multinode-20211019233815-6808" exists ...
	I1019 23:41:17.811954   84441 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211019233815-6808
	I1019 23:41:17.851385   84441 host.go:66] Checking if "multinode-20211019233815-6808" exists ...
	I1019 23:41:17.851651   84441 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 23:41:17.851686   84441 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211019233815-6808
	I1019 23:41:17.891750   84441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/multinode-20211019233815-6808/id_rsa Username:docker}
	I1019 23:41:17.968977   84441 ssh_runner.go:152] Run: systemctl --version
	I1019 23:41:17.972579   84441 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:41:17.981280   84441 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:41:18.060546   84441 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2021-10-19 23:41:18.01700046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:41:18.061380   84441 kubeconfig.go:92] found "multinode-20211019233815-6808" server: "https://192.168.49.2:8443"
	I1019 23:41:18.061403   84441 api_server.go:165] Checking apiserver status ...
	I1019 23:41:18.061435   84441 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 23:41:18.079072   84441 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup
	I1019 23:41:18.086452   84441 api_server.go:181] apiserver freezer: "4:freezer:/docker/085f8c2cf2de993fab729b936232ac993fbc8be1ef05ca0caa9c82dcb2f778dd/kubepods/burstable/pode67b89f1e1cd5aba2f847027cf8201aa/81676e0b0a83095afe283d972ffa69d71b1b16cd6e29df4876e1c8c2a5b592a6"
	I1019 23:41:18.086514   84441 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/085f8c2cf2de993fab729b936232ac993fbc8be1ef05ca0caa9c82dcb2f778dd/kubepods/burstable/pode67b89f1e1cd5aba2f847027cf8201aa/81676e0b0a83095afe283d972ffa69d71b1b16cd6e29df4876e1c8c2a5b592a6/freezer.state
	I1019 23:41:18.092455   84441 api_server.go:203] freezer state: "THAWED"
	I1019 23:41:18.092475   84441 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 23:41:18.097197   84441 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 23:41:18.097216   84441 status.go:419] multinode-20211019233815-6808 apiserver status = Running (err=<nil>)
	I1019 23:41:18.097227   84441 status.go:255] multinode-20211019233815-6808 status: &{Name:multinode-20211019233815-6808 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 23:41:18.097246   84441 status.go:253] checking status of multinode-20211019233815-6808-m02 ...
	I1019 23:41:18.097498   84441 cli_runner.go:115] Run: docker container inspect multinode-20211019233815-6808-m02 --format={{.State.Status}}
	I1019 23:41:18.135462   84441 status.go:328] multinode-20211019233815-6808-m02 host status = "Running" (err=<nil>)
	I1019 23:41:18.135484   84441 host.go:66] Checking if "multinode-20211019233815-6808-m02" exists ...
	I1019 23:41:18.135707   84441 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211019233815-6808-m02
	I1019 23:41:18.173330   84441 host.go:66] Checking if "multinode-20211019233815-6808-m02" exists ...
	I1019 23:41:18.173567   84441 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 23:41:18.173598   84441 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211019233815-6808-m02
	I1019 23:41:18.211466   84441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32827 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/machines/multinode-20211019233815-6808-m02/id_rsa Username:docker}
	I1019 23:41:18.315928   84441 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1019 23:41:18.324306   84441 status.go:255] multinode-20211019233815-6808-m02 status: &{Name:multinode-20211019233815-6808-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 23:41:18.324336   84441 status.go:253] checking status of multinode-20211019233815-6808-m03 ...
	I1019 23:41:18.324597   84441 cli_runner.go:115] Run: docker container inspect multinode-20211019233815-6808-m03 --format={{.State.Status}}
	I1019 23:41:18.365872   84441 status.go:328] multinode-20211019233815-6808-m03 host status = "Stopped" (err=<nil>)
	I1019 23:41:18.365894   84441 status.go:341] host is not running, skipping remaining checks
	I1019 23:41:18.365902   84441 status.go:255] multinode-20211019233815-6808-m03 status: &{Name:multinode-20211019233815-6808-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.81s)
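In the StopNode run above, `minikube status` exits with status 7 once the m03 host is stopped, while the plain-text stdout lists one block per node. A sketch of parsing that text into per-node records and reproducing the "any host stopped" check (the helper names are illustrative; the field names are exactly those printed in the stdout blocks above):

```python
def parse_status(stdout: str):
    """Split `minikube status` plain-text output into one dict per node."""
    nodes, current = [], None
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue
        if ":" not in line:  # a bare node name starts a new block
            current = {"name": line}
            nodes.append(current)
        elif current is not None:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return nodes

def any_stopped(nodes) -> bool:
    # Mirrors the behavior observed above: status reports a non-zero
    # exit (7 in this run) when any node's host is not Running.
    return any(n.get("host") != "Running" for n in nodes)
```

Note that worker blocks legitimately omit the `apiserver` and `kubeconfig` lines, so a parser keyed on a fixed set of fields per node would misread this output.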

TestMultiNode/serial/StartAfterStop (35.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 node start m03 --alsologtostderr
multinode_test.go:236: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211019233815-6808 node start m03 --alsologtostderr: (34.909151856s)
multinode_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status
multinode_test.go:257: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.79s)

TestMultiNode/serial/RestartKeepsNodes (188.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211019233815-6808
multinode_test.go:272: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20211019233815-6808
E1019 23:42:11.995241    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1019 23:42:39.680740    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
multinode_test.go:272: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20211019233815-6808: (1m1.510667751s)
multinode_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211019233815-6808 --wait=true -v=8 --alsologtostderr
E1019 23:43:55.845632    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
multinode_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211019233815-6808 --wait=true -v=8 --alsologtostderr: (2m7.155412945s)
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211019233815-6808
--- PASS: TestMultiNode/serial/RestartKeepsNodes (188.77s)

TestMultiNode/serial/DeleteNode (25.03s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 node delete m03
E1019 23:45:18.891732    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211019233815-6808 node delete m03: (24.284703021s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  docker volume ls
multinode_test.go:406: (dbg) Run:  kubectl get nodes
multinode_test.go:414: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (25.03s)

TestMultiNode/serial/StopMultiNode (41.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 stop
multinode_test.go:296: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211019233815-6808 stop: (41.25966241s)
multinode_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211019233815-6808 status: exit status 7 (126.934981ms)

-- stdout --
	multinode-20211019233815-6808
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211019233815-6808-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr: exit status 7 (123.720728ms)

-- stdout --
	multinode-20211019233815-6808
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211019233815-6808-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 23:46:09.385740   95543 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:46:09.385951   95543 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:46:09.385963   95543 out.go:310] Setting ErrFile to fd 2...
	I1019 23:46:09.385969   95543 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:46:09.386084   95543 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:46:09.386239   95543 out.go:304] Setting JSON to false
	I1019 23:46:09.386262   95543 mustload.go:65] Loading cluster: multinode-20211019233815-6808
	I1019 23:46:09.386569   95543 config.go:176] Loaded profile config "multinode-20211019233815-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:46:09.386584   95543 status.go:253] checking status of multinode-20211019233815-6808 ...
	I1019 23:46:09.386942   95543 cli_runner.go:115] Run: docker container inspect multinode-20211019233815-6808 --format={{.State.Status}}
	I1019 23:46:09.424016   95543 status.go:328] multinode-20211019233815-6808 host status = "Stopped" (err=<nil>)
	I1019 23:46:09.424037   95543 status.go:341] host is not running, skipping remaining checks
	I1019 23:46:09.424042   95543 status.go:255] multinode-20211019233815-6808 status: &{Name:multinode-20211019233815-6808 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 23:46:09.424065   95543 status.go:253] checking status of multinode-20211019233815-6808-m02 ...
	I1019 23:46:09.424303   95543 cli_runner.go:115] Run: docker container inspect multinode-20211019233815-6808-m02 --format={{.State.Status}}
	I1019 23:46:09.459892   95543 status.go:328] multinode-20211019233815-6808-m02 host status = "Stopped" (err=<nil>)
	I1019 23:46:09.459912   95543 status.go:341] host is not running, skipping remaining checks
	I1019 23:46:09.459917   95543 status.go:255] multinode-20211019233815-6808-m02 status: &{Name:multinode-20211019233815-6808-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.51s)

TestMultiNode/serial/RestartMultiNode (109.43s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211019233815-6808 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1019 23:47:11.993276    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
multinode_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211019233815-6808 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m48.699878316s)
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211019233815-6808 status --alsologtostderr
multinode_test.go:356: (dbg) Run:  kubectl get nodes
multinode_test.go:364: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (109.43s)

TestMultiNode/serial/ValidateNameConflict (46.43s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211019233815-6808
multinode_test.go:434: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211019233815-6808-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20211019233815-6808-m02 --driver=docker  --container-runtime=containerd: exit status 14 (97.319657ms)

-- stdout --
	* [multinode-20211019233815-6808-m02] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	  - MINIKUBE_LOCATION=12739
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20211019233815-6808-m02' is duplicated with machine name 'multinode-20211019233815-6808-m02' in profile 'multinode-20211019233815-6808'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211019233815-6808-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:442: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211019233815-6808-m03 --driver=docker  --container-runtime=containerd: (43.136495123s)
multinode_test.go:449: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20211019233815-6808
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20211019233815-6808: exit status 80 (325.573606ms)

-- stdout --
	* Adding node m03 to cluster multinode-20211019233815-6808
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20211019233815-6808-m03 already exists in multinode-20211019233815-6808-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20211019233815-6808-m03
multinode_test.go:454: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20211019233815-6808-m03: (2.823536278s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.43s)

TestPreload (129.9s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20211019234850-6808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E1019 23:48:55.845797    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20211019234850-6808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m24.885210178s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20211019234850-6808 -- sudo crictl pull busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20211019234850-6808 -- sudo crictl pull busybox: (1.552410631s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20211019234850-6808 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20211019234850-6808 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (40.199738725s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20211019234850-6808 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20211019234850-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20211019234850-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20211019234850-6808: (2.919116201s)
--- PASS: TestPreload (129.90s)

TestInsufficientStorage (13.37s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20211019235229-6808 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20211019235229-6808 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.387840216s)

-- stdout --
	{"specversion":"1.0","id":"5b19fa5d-5932-4839-9a35-dac4cc54d3f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211019235229-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"84d132a4-d9b3-43d9-8697-e63ce3ae91ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig"}}
	{"specversion":"1.0","id":"99275046-e73e-4913-83e5-b56db16e6003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c02fe6f1-efc8-43c3-9f04-dedb7c8e639d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube"}}
	{"specversion":"1.0","id":"68a9316d-f368-4b19-b72a-8cdcfb1dabdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"718d3e04-ab48-40eb-8aca-65ea0088af88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e3963fe1-7802-4c59-83ff-2741a9d5971f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c7b03e8-4310-4b67-9bfd-a9e9a7cd22f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"c404f736-3f37-4792-bc89-67ca39137e0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"048b0f1e-155d-4715-9d9b-113dd7175e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211019235229-6808 in cluster insufficient-storage-20211019235229-6808","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f8f460f-a111-422a-a110-95c0ece5afb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"dea4a6d6-ffe5-42e6-b007-d2591e3a27d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2494d4cb-fb36-4027-bdde-d8d610da906b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20211019235229-6808 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20211019235229-6808 --output=json --layout=cluster: exit status 7 (366.740712ms)

-- stdout --
	{"Name":"insufficient-storage-20211019235229-6808","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.23.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211019235229-6808","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1019 23:52:36.503230  116553 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211019235229-6808" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20211019235229-6808 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20211019235229-6808 --output=json --layout=cluster: exit status 7 (331.943625ms)

-- stdout --
	{"Name":"insufficient-storage-20211019235229-6808","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.23.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211019235229-6808","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1019 23:52:36.835997  116655 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211019235229-6808" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	E1019 23:52:36.847234  116655 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/insufficient-storage-20211019235229-6808/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20211019235229-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20211019235229-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20211019235229-6808: (6.281519493s)
--- PASS: TestInsufficientStorage (13.37s)

TestRunningBinaryUpgrade (103.06s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.727173495.exe start -p running-upgrade-20211019235358-6808 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.727173495.exe start -p running-upgrade-20211019235358-6808 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (36.317850351s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20211019235358-6808 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20211019235358-6808 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.992308438s)
helpers_test.go:175: Cleaning up "running-upgrade-20211019235358-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20211019235358-6808

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20211019235358-6808: (3.322398563s)
--- PASS: TestRunningBinaryUpgrade (103.06s)

TestKubernetesUpgrade (194.63s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.731305449s)
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20211019235444-6808

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20211019235444-6808: (21.076939426s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20211019235444-6808 status --format={{.Host}}
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20211019235444-6808 status --format={{.Host}}: exit status 7 (93.907101ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:237: status error: exit status 7 (may be ok)
version_upgrade_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.22.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:246: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.22.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.797743672s)
version_upgrade_test.go:251: (dbg) Run:  kubectl --context kubernetes-upgrade-20211019235444-6808 version --output=json
version_upgrade_test.go:270: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:272: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd: exit status 106 (110.596568ms)

-- stdout --
	* [kubernetes-upgrade-20211019235444-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	  - MINIKUBE_LOCATION=12739
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.3-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20211019235444-6808
	    minikube start -p kubernetes-upgrade-20211019235444-6808 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211019235444-68082 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.3-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211019235444-6808 --kubernetes-version=v1.22.3-rc.0
	    

** /stderr **
version_upgrade_test.go:276: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:278: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.22.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:278: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211019235444-6808 --memory=2200 --kubernetes-version=v1.22.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.303063215s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20211019235444-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20211019235444-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20211019235444-6808: (10.444705748s)
--- PASS: TestKubernetesUpgrade (194.63s)

TestMissingContainerUpgrade (168.37s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:312: (dbg) Run:  /tmp/minikube-v1.9.1.3495384586.exe start -p missing-upgrade-20211019235420-6808 --memory=2200 --driver=docker  --container-runtime=containerd
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:312: (dbg) Done: /tmp/minikube-v1.9.1.3495384586.exe start -p missing-upgrade-20211019235420-6808 --memory=2200 --driver=docker  --container-runtime=containerd: (1m32.641187066s)
version_upgrade_test.go:321: (dbg) Run:  docker stop missing-upgrade-20211019235420-6808
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Done: docker stop missing-upgrade-20211019235420-6808: (12.047720846s)
version_upgrade_test.go:326: (dbg) Run:  docker rm missing-upgrade-20211019235420-6808
version_upgrade_test.go:332: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20211019235420-6808 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:332: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20211019235420-6808 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.103661386s)
helpers_test.go:175: Cleaning up "missing-upgrade-20211019235420-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20211019235420-6808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20211019235420-6808: (3.12177647s)
--- PASS: TestMissingContainerUpgrade (168.37s)

TestPause/serial/Start (78.74s)

=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20211019235243-6808 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p pause-20211019235243-6808 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m18.735357696s)
--- PASS: TestPause/serial/Start (78.74s)

TestStoppedBinaryUpgrade/Upgrade (120.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.2680112011.exe start -p stopped-upgrade-20211019235243-6808 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1019 23:53:35.042115    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
version_upgrade_test.go:186: (dbg) Done: /tmp/minikube-v1.16.0.2680112011.exe start -p stopped-upgrade-20211019235243-6808 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.312590204s)
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.16.0.2680112011.exe -p stopped-upgrade-20211019235243-6808 stop
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.16.0.2680112011.exe -p stopped-upgrade-20211019235243-6808 stop: (1.387479331s)
version_upgrade_test.go:201: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20211019235243-6808 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:201: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20211019235243-6808 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.769335882s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.47s)

TestNetworkPlugins/group/false (0.77s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:214: (dbg) Run:  out/minikube-linux-amd64 start -p false-20211019235355-6808 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20211019235355-6808 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (240.91752ms)
-- stdout --
	* [false-20211019235355-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I1019 23:53:55.340853  129868 out.go:297] Setting OutFile to fd 1 ...
	I1019 23:53:55.340930  129868 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:53:55.340934  129868 out.go:310] Setting ErrFile to fd 2...
	I1019 23:53:55.340938  129868 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1019 23:53:55.341044  129868 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/bin
	I1019 23:53:55.341314  129868 out.go:304] Setting JSON to false
	I1019 23:53:55.377070  129868 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5797,"bootTime":1634681838,"procs":271,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1019 23:53:55.377168  129868 start.go:121] virtualization: kvm guest
	I1019 23:53:55.379779  129868 out.go:176] * [false-20211019235355-6808] minikube v1.23.2 on Debian 9.13 (kvm/amd64)
	I1019 23:53:55.379935  129868 notify.go:169] Checking for updates...
	I1019 23:53:55.381509  129868 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/kubeconfig
	I1019 23:53:55.382905  129868 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 23:53:55.384208  129868 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube
	I1019 23:53:55.385447  129868 out.go:176]   - MINIKUBE_LOCATION=12739
	I1019 23:53:55.386524  129868 config.go:176] Loaded profile config "offline-containerd-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:53:55.386685  129868 config.go:176] Loaded profile config "pause-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
	I1019 23:53:55.386881  129868 config.go:176] Loaded profile config "stopped-upgrade-20211019235243-6808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1019 23:53:55.386922  129868 driver.go:343] Setting default libvirt URI to qemu:///system
	I1019 23:53:55.436060  129868 docker.go:132] docker version: linux-19.03.15
	I1019 23:53:55.436166  129868 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1019 23:53:55.524397  129868 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:188 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:57 SystemTime:2021-10-19 23:53:55.476116193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1019 23:53:55.524495  129868 docker.go:237] overlay module found
	I1019 23:53:55.526513  129868 out.go:176] * Using the docker driver based on user configuration
	I1019 23:53:55.526535  129868 start.go:278] selected driver: docker
	I1019 23:53:55.526540  129868 start.go:751] validating driver "docker" against <nil>
	I1019 23:53:55.526556  129868 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W1019 23:53:55.526614  129868 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1019 23:53:55.526631  129868 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1019 23:53:55.527983  129868 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1019 23:53:55.529715  129868 out.go:176] 
	W1019 23:53:55.529810  129868 out.go:241] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1019 23:53:55.531227  129868 out.go:176] 
** /stderr **
helpers_test.go:175: Cleaning up "false-20211019235355-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20211019235355-6808
E1019 23:53:55.846099    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false (0.77s)

TestPause/serial/SecondStartNoReconfiguration (15.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20211019235243-6808 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Done: out/minikube-linux-amd64 start -p pause-20211019235243-6808 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.942657596s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.96s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20211019235243-6808 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20211019235243-6808 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20211019235243-6808 --output=json --layout=cluster: exit status 2 (493.498178ms)
-- stdout --
	{"Name":"pause-20211019235243-6808","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.23.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20211019235243-6808","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

TestPause/serial/Unpause (0.98s)

=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20211019235243-6808 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.98s)

TestPause/serial/DeletePaused (3.25s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20211019235243-6808 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20211019235243-6808 --alsologtostderr -v=5: (3.25072029s)
--- PASS: TestPause/serial/DeletePaused (3.25s)

TestPause/serial/VerifyDeletedResources (1.32s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:140: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.183725105s)
pause_test.go:166: (dbg) Run:  docker ps -a
pause_test.go:171: (dbg) Run:  docker volume inspect pause-20211019235243-6808
pause_test.go:171: (dbg) Non-zero exit: docker volume inspect pause-20211019235243-6808: exit status 1 (49.423854ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20211019235243-6808
** /stderr **
pause_test.go:176: (dbg) Run:  sudo docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:209: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20211019235243-6808
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

TestNetworkPlugins/group/auto/Start (62.02s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20211019235354-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
E1019 23:57:11.993615    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20211019235354-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m2.017864196s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.02s)

TestNetworkPlugins/group/custom-weave/Start (82.96s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20211019235356-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20211019235356-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m22.958683796s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (82.96s)

TestNetworkPlugins/group/cilium/Start (84.04s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20211019235356-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20211019235356-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m24.036099929s)
--- PASS: TestNetworkPlugins/group/cilium/Start (84.04s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20211019235354-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20211019235354-6808 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-sv2vb" [3c5ed46d-bb02-4a33-8510-ae993b0956d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-sv2vb" [3c5ed46d-bb02-4a33-8510-ae993b0956d3] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006677699s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.39s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20211019235354-6808 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20211019235354-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20211019235354-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (89.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20211019235356-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p calico-20211019235356-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m29.321372318s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.32s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20211019235356-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-weave/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20211019235356-6808 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-gztdq" [e12359a4-8d0e-4d86-8cc7-4803187c50da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-gztdq" [e12359a4-8d0e-4d86-8cc7-4803187c50da] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 11.009043519s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (11.45s)

TestNetworkPlugins/group/enable-default-cni/Start (64.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20211019235354-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20211019235354-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m4.575232478s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.58s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-kps28" [85405a55-925b-4f68-8615-fde6952ff6d6] Running
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015589566s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20211019235356-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

TestNetworkPlugins/group/cilium/NetCatPod (13.23s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20211019235356-6808 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-fqmqm" [2276956a-e89e-4ca1-b85a-190555543235] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-fqmqm" [2276956a-e89e-4ca1-b85a-190555543235] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.904607193s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.23s)

TestNetworkPlugins/group/kindnet/Start (78.93s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20211019235355-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20211019235355-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m18.929705709s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.93s)

TestNetworkPlugins/group/cilium/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20211019235356-6808 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.18s)

TestNetworkPlugins/group/cilium/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20211019235356-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20211019235356-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (64.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20211019235354-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20211019235354-6808 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (1m4.238683265s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.24s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-stqjz" [ce1ea3fe-8d4c-47b9-a55c-a586163c9365] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017933576s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20211019235356-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (20.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20211019235356-6808 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-nrctq" [fe7678c0-194c-496e-8c9d-9638c42b7654] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-nrctq" [fe7678c0-194c-496e-8c9d-9638c42b7654] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 20.048404106s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (20.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20211019235354-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20211019235356-6808 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20211019235354-6808 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-58j5v" [577dde7a-ae12-4f82-ad2f-123c184acd0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-58j5v" [577dde7a-ae12-4f82-ad2f-123c184acd0c] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006281689s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Run:  kubectl --context calico-20211019235356-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:232: (dbg) Run:  kubectl --context calico-20211019235356-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (111.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20211020000023-6808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20211020000023-6808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (1m51.131451439s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (111.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211019235354-6808 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:182: (dbg) Run:  kubectl --context enable-default-cni-20211019235354-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20211019235354-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (91.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20211020000033-6808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20211020000033-6808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0: (1m31.889567699s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.89s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-q2mzw" [4841626b-a063-4084-8bb8-e1032e5a48e1] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014213721s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.95s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20211019235355-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.95s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.69s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20211019235354-6808 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.69s)

TestNetworkPlugins/group/kindnet/NetCatPod (20.7s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20211019235355-6808 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-vsjlx" [f3a0bcd3-ee75-4b37-b545-71c5af075663] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-vsjlx" [f3a0bcd3-ee75-4b37-b545-71c5af075663] Running

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 20.006118606s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (20.70s)

TestNetworkPlugins/group/bridge/NetCatPod (21.57s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20211019235354-6808 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-p62zb" [ce987d8f-8cc2-404f-9355-fb5e79aecd5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-p62zb" [ce987d8f-8cc2-404f-9355-fb5e79aecd5d] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 21.009742574s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (21.57s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20211019235355-6808 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20211019235355-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20211019235355-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211019235354-6808 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20211019235354-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20211019235354-6808 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E1020 00:08:32.545792    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:08:39.005951    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:08:39.039148    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:08:55.846242    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1020 00:08:59.750099    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:09:22.912460    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:09:30.320829    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:09:50.594266    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:09:53.251044    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:10:15.043125    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1020 00:10:19.293402    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (258.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20211020000119-6808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20211020000119-6808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2: (4m18.808471547s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (258.81s)

TestStartStop/group/newest-cni/serial/FirstStart (58.35s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20211020000120-6808 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0
E1020 00:01:58.892070    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20211020000120-6808 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0: (58.348695054s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.35s)

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211020000033-6808 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0f9939f0-8f64-4590-b271-10d0bf9706a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [0f9939f0-8f64-4590-b271-10d0bf9706a7] Running
E1020 00:02:11.994275    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.012324754s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211020000033-6808 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211020000023-6808 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [fb341ee7-3138-11ec-97a9-02425f59f1ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:342: "busybox" [fb341ee7-3138-11ec-97a9-02425f59f1ac] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.011453668s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211020000023-6808 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.7s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20211020000033-6808 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211020000033-6808 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/no-preload/serial/Stop (20.72s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20211020000033-6808 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20211020000033-6808 --alsologtostderr -v=3: (20.724704528s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.72s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20211020000120-6808 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/newest-cni/serial/Stop (20.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20211020000120-6808 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20211020000120-6808 --alsologtostderr -v=3: (20.743060948s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20211020000023-6808 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211020000023-6808 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/old-k8s-version/serial/Stop (20.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20211020000023-6808 --alsologtostderr -v=3
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20211020000023-6808 --alsologtostderr -v=3: (20.842389566s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.84s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808: exit status 7 (91.389937ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20211020000033-6808 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (327.95s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20211020000033-6808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20211020000033-6808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0: (5m27.546623912s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (327.95s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808: exit status 7 (88.826761ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20211020000120-6808 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (36.33s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20211020000120-6808 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20211020000120-6808 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.3-rc.0: (35.930471336s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808: exit status 7 (87.700196ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20211020000023-6808 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (455.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20211020000023-6808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0
E1020 00:03:11.321447    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.326709    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.336938    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.357228    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.397533    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.477873    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.638278    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:11.959343    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:12.600211    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:13.881085    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:16.441796    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20211020000023-6808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (7m35.532269756s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808
E1020 00:10:20.937339    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (455.96s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20211020000120-6808 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (2.75s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20211020000120-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808: exit status 2 (380.484016ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808: exit status 2 (379.943021ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20211020000120-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211020000120-6808 -n newest-cni-20211020000120-6808
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.75s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (61.49s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20211020000323-6808 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2
E1020 00:03:31.803017    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:52.283561    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:03:55.846140    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/addons-20211019231643-6808/client.crt: no such file or directory
E1020 00:03:59.750184    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:03:59.755474    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:03:59.765731    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:03:59.785918    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:03:59.826173    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:03:59.906479    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:00.066897    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:00.387436    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:01.028028    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:02.308408    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:04.869476    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:11.827276    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:22.068446    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:22.913381    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:22.918672    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:22.928938    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:22.949199    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:22.989508    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:23.069822    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:23.230177    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:23.550867    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:24.191028    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20211020000323-6808 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2: (1m1.494244491s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (61.49s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.45s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211020000323-6808 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5cf5501a-2dea-4069-8afd-cfdda77bb851] Pending
E1020 00:04:25.471186    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
helpers_test.go:342: "busybox" [5cf5501a-2dea-4069-8afd-cfdda77bb851] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1020 00:04:28.031589    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
helpers_test.go:342: "busybox" [5cf5501a-2dea-4069-8afd-cfdda77bb851] Running
E1020 00:04:33.151852    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:33.244061    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.012239309s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211020000323-6808 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.45s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.67s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20211020000323-6808 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211020000323-6808 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.79s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20211020000323-6808 --alsologtostderr -v=3
E1020 00:04:42.549466    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:43.392324    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.251918    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.257187    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.267421    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.287646    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.327927    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.408987    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.569395    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.889947    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:54.531017    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:55.812042    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
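The repeated cert_rotation errors above (and the similar runs later in this report) appear to come from background client-cert reload attempts against profiles that earlier tests already deleted; only the timestamps differ. When triaging a report like this, it can help to collapse the spam into a per-profile count. A minimal Python sketch (the sample lines are abridged copies of the log lines above, with the long /home/jenkins/... prefix shortened for readability):

```python
import re

# Abridged sample of the cert_rotation lines above (hypothetical shortening;
# the real lines carry the full /home/jenkins/minikube-integration/... path).
log = """\
E1020 00:04:53.257187 6808 cert_rotation.go:168] key failed with : open .minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:04:53.267421 6808 cert_rotation.go:168] key failed with : open .minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:03.872506 6808 cert_rotation.go:168] key failed with : open .minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
"""

def missing_profiles(text: str) -> dict:
    """Count cert_rotation failures per minikube profile name."""
    counts = {}
    for m in re.finditer(r"profiles/([^/]+)/client\.crt", text):
        counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

print(missing_profiles(log))
```

Run against the full log text, this reduces hundreds of near-identical lines to a handful of affected profile names.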
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20211020000323-6808 --alsologtostderr -v=3: (20.794523332s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.79s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808: exit status 7 (93.416658ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20211020000323-6808 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (328.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20211020000323-6808 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2
E1020 00:04:58.373190    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:03.493973    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:03.872506    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:13.734679    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:19.292702    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:23.510093    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:24.564120    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:24.574825    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:24.595090    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:24.635370    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:24.715763    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:24.876175    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:25.200013    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:25.841038    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:27.121778    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:29.682505    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:34.215471    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:34.803084    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20211020000323-6808 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2: (5m27.690639495s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (328.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c8307abc-fcba-4169-9e6b-d18671b48127] Pending
helpers_test.go:342: "busybox" [c8307abc-fcba-4169-9e6b-d18671b48127] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c8307abc-fcba-4169-9e6b-d18671b48127] Running
E1020 00:05:44.833382    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:05:45.043650    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.011107735s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.66s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20211020000119-6808 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20211020000119-6808 --alsologtostderr -v=3
E1020 00:05:48.701628    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:48.706882    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:48.717125    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:48.737358    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:48.777608    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:48.857928    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:49.018875    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:49.339463    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:49.980355    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:51.261253    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:53.821914    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:05:55.165218    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.196429    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.201695    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.211941    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.232188    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.272555    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.352882    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.513292    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:55.833826    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:56.474206    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:57.754376    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:05:58.943001    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:06:00.315357    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:06:05.436181    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:06:05.524462    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20211020000119-6808 --alsologtostderr -v=3: (20.618998519s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808: exit status 7 (92.599612ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20211020000119-6808 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E1020 00:06:09.183759    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (330.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20211020000119-6808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2
E1020 00:06:15.176185    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
E1020 00:06:15.677258    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:06:29.664365    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:06:36.158415    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:06:46.480567    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/custom-weave-20211019235356-6808/client.crt: no such file or directory
E1020 00:06:46.484724    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
E1020 00:07:06.753537    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/cilium-20211019235356-6808/client.crt: no such file or directory
E1020 00:07:10.625001    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/kindnet-20211019235355-6808/client.crt: no such file or directory
E1020 00:07:11.993985    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/functional-20211019232952-6808/client.crt: no such file or directory
E1020 00:07:17.118720    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/bridge-20211019235354-6808/client.crt: no such file or directory
E1020 00:07:37.097034    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/calico-20211019235356-6808/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20211020000119-6808 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.2: (5m30.230386472s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211020000119-6808 -n embed-certs-20211020000119-6808
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (330.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-qs4j8" [13ba912b-f633-40a5-9b61-8a99dd75cbad] Running
E1020 00:08:08.405503    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/enable-default-cni-20211019235354-6808/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012332244s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-qs4j8" [13ba912b-f633-40a5-9b61-8a99dd75cbad] Running
E1020 00:08:11.322152    6808 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3355-65d4aecda65478a8e2ccd637d75539d3b6910207/.minikube/profiles/auto-20211019235354-6808/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006148198s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211020000033-6808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20211020000033-6808 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)
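The image check above shells into the node with `crictl images -o json` and reports any image that was not pulled by minikube itself. A rough sketch of that classification step (the sample JSON is hypothetical and abridged, shaped like typical crictl output; the registry allow-list here is a simplification, not the test's actual list):

```python
import json

# Hypothetical, abridged `crictl images -o json` output for illustration.
sample = '''{"images": [
  {"id": "sha256:aaa", "repoTags": ["k8s.gcr.io/pause:3.5"]},
  {"id": "sha256:bbb", "repoTags": ["docker.io/kindest/kindnetd:v20210326-1e038dc5"]},
  {"id": "sha256:ccc", "repoTags": ["docker.io/library/busybox:1.28.4-glibc"]}
]}'''

# Simplified allow-list of registries minikube pulls from (an assumption,
# not the exhaustive list used by start_stop_delete_test.go).
ALLOWED = ("k8s.gcr.io/", "gcr.io/k8s-minikube/", "docker.io/kubernetesui/")

def non_minikube_images(raw: str) -> list:
    """Return repo tags that are not from an allowed minikube registry."""
    found = []
    for img in json.loads(raw).get("images", []):
        for tag in img.get("repoTags", []):
            if not tag.startswith(ALLOWED):
                found.append(tag)
    return found

for tag in non_minikube_images(sample):
    print("Found non-minikube image:", tag)
```

With the sample above, the kindnetd and busybox tags are flagged, matching the "Found non-minikube image" lines in the log (the test deployed busybox itself, so these findings are expected rather than failures).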

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20211020000033-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808: exit status 2 (404.161107ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808: exit status 2 (404.170189ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20211020000033-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211020000033-6808 -n no-preload-20211020000033-6808
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-7c5589b6d7-brbcb" [e6e507ee-3139-11ec-9318-0242c0a84c02] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013420883s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-mb4t5" [685c8011-574b-4ac7-a3b6-94a109bca400] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010863348s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-7c5589b6d7-brbcb" [e6e507ee-3139-11ec-9318-0242c0a84c02] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006765261s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211020000023-6808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-mb4t5" [685c8011-574b-4ac7-a3b6-94a109bca400] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005840622s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211020000323-6808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20211020000023-6808 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20211020000023-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808: exit status 2 (378.011778ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808: exit status 2 (378.395485ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20211020000023-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20211020000023-6808 -n old-k8s-version-20211020000023-6808
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20211020000323-6808 "sudo crictl images -o json"
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/default-k8s-different-port/serial/Pause (3.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20211020000323-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808: exit status 2 (382.956765ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808: exit status 2 (397.83871ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20211020000323-6808 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20211020000323-6808 -n default-k8s-different-port-20211020000323-6808
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-wsz4n" [cd04749f-978f-48f5-b2d1-ab0da6414cce] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010527781s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-wsz4n" [cd04749f-978f-48f5-b2d1-ab0da6414cce] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005792058s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211020000119-6808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20211020000119-6808 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

Test skip (24/269)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.22.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.2/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.2/cached-images (0.00s)

TestDownloadOnly/v1.22.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.2/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.2/binaries (0.00s)

TestDownloadOnly/v1.22.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.2/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.2/kubectl (0.00s)

TestDownloadOnly/v1.22.3-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.3-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.3-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.3-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.3-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.3-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.3-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.3-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.3-rc.0/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:36: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:401: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:491: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.52s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:89: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20211019235354-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20211019235354-6808
--- SKIP: TestNetworkPlugins/group/kubenet (0.52s)

TestNetworkPlugins/group/flannel (0.53s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20211019235354-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20211019235354-6808
--- SKIP: TestNetworkPlugins/group/flannel (0.53s)

TestStartStop/group/disable-driver-mounts (0.69s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20211020000120-6808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20211020000120-6808
--- SKIP: TestStartStop/group/disable-driver-mounts (0.69s)