Test Report: Docker_Linux_containerd_arm64 16597

f978965594d8c309a3fb9a5e198e88f65b92a95d:2023-05-30:29495

Test fail (10/302)

TestAddons/parallel/Registry (196.2s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 41.859048ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-j74nb" [80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014322439s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-d7x5f" [f4513549-bfa8-495e-8b35-eee656d4eb84] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01015256s
addons_test.go:316: (dbg) Run:  kubectl --context addons-084881 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-084881 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-084881 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (18.461409262s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 ip
2023/05/30 20:54:03 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:54:03 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:03 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/30 20:54:04 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:04 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:361: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 addons disable registry --alsologtostderr -v=1
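
The failing step above is the external-access probe: the in-cluster wget against registry.kube-system.svc.cluster.local succeeded, but every retried GET against the node IP (http://192.168.49.2:5000) was refused. As a rough sketch of that retry-then-give-up pattern (plain net/http with a doubling backoff; checkExternalAccess and its attempt budget are illustrative stand-ins, not minikube's actual addons_test.go helper):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkExternalAccess mimics the probe logged above: a handful of GET
// attempts with doubling backoff (1s, 2s, ...), then give up with an
// error like the one at addons_test.go:361. Illustrative sketch only.
func checkExternalAccess(url string, attempts int) error {
	var lastErr error
	backoff := time.Second
	for left := attempts - 1; left >= 0; left-- {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // registry reachable from outside the cluster
		}
		lastErr = err
		if left > 0 {
			fmt.Printf("[DEBUG] GET %s: retrying in %v (%d left)\n", url, backoff, left)
			time.Sleep(backoff)
			backoff *= 2
		}
	}
	return fmt.Errorf("GET %s giving up after %d attempt(s): %v", url, attempts, lastErr)
}

func main() {
	if err := checkExternalAccess("http://192.168.49.2:5000", 5); err != nil {
		fmt.Println(err)
	}
}
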
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-084881
helpers_test.go:235: (dbg) docker inspect addons-084881:
-- stdout --
	[
	    {
	        "Id": "859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8",
	        "Created": "2023-05-30T20:51:36.962467612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2295261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-30T20:51:37.28701817Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/hosts",
	        "LogPath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8-json.log",
	        "Name": "/addons-084881",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-084881:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-084881",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc-init/diff:/var/lib/docker/overlay2/e2ed5c199a0c2e09246fd5671b525fc670ce3dff10bd06ad0c2ad37b9496c295/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-084881",
	                "Source": "/var/lib/docker/volumes/addons-084881/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-084881",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-084881",
	                "name.minikube.sigs.k8s.io": "addons-084881",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b2aa69c4931017eb29496833d122325d44b5b0e2be022386fa049f9b6e6bb54",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40946"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40945"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40944"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40943"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5b2aa69c4931",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-084881": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d084e65f3",
	                        "addons-084881"
	                    ],
	                    "NetworkID": "c6f377c8d6cc1177d9b92e2a53dac44d9a269723fb79fe723bcc8795800df6da",
	                    "EndpointID": "a45f38a2f9a969cf149af0f47c5702d7de9d91e3a3aaac71dfc1590433845cce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
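
In the inspect output above, the published mappings sit under NetworkSettings.Ports: 5000/tcp is bound to 127.0.0.1:40944 on the host, whereas the failed probe dialed 192.168.49.2:5000 directly. A small sketch of reading such a mapping with a Go template over docker container inspect -f, the same pattern the 22/tcp lookups later in this log use (hostPortFor is a hypothetical helper name, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor shells out to `docker container inspect -f` with a Go
// template over .NetworkSettings.Ports, mirroring the 22/tcp lookups
// in this log. Hypothetical helper for illustration.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor("addons-084881", "5000/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("registry published at 127.0.0.1:" + port) // 40944 in this run
}
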
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-084881 -n addons-084881
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-084881 logs -n 25: (1.555331918s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:50 UTC |                     |
	|         | -p download-only-942566        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:51 UTC |                     |
	|         | -p download-only-942566        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| delete  | -p download-only-942566        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| delete  | -p download-only-942566        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| start   | --download-only -p             | download-docker-074087 | jenkins | v1.30.1 | 30 May 23 20:51 UTC |                     |
	|         | download-docker-074087         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p download-docker-074087      | download-docker-074087 | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| start   | --download-only -p             | binary-mirror-812172   | jenkins | v1.30.1 | 30 May 23 20:51 UTC |                     |
	|         | binary-mirror-812172           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42239         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-812172        | binary-mirror-812172   | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| start   | -p addons-084881               | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:53 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:53 UTC | 30 May 23 20:53 UTC |
	|         | addons-084881                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:53 UTC | 30 May 23 20:53 UTC |
	|         | -p addons-084881               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-084881 ip               | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:54 UTC | 30 May 23 20:54 UTC |
	| addons  | addons-084881 addons           | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:54 UTC | 30 May 23 20:55 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-084881 addons           | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-084881 addons           | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | addons-084881                  |                        |         |         |                     |                     |
	| ssh     | addons-084881 ssh curl -s      | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-084881 ip               | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	| addons  | addons-084881 addons disable   | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-084881 addons disable   | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons  | addons-084881 addons disable   | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:56 UTC | 30 May 23 20:56 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 20:51:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 20:51:14.525064 2294803 out.go:296] Setting OutFile to fd 1 ...
	I0530 20:51:14.525283 2294803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:51:14.525325 2294803 out.go:309] Setting ErrFile to fd 2...
	I0530 20:51:14.525347 2294803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:51:14.525538 2294803 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 20:51:14.526022 2294803 out.go:303] Setting JSON to false
	I0530 20:51:14.527104 2294803 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":174774,"bootTime":1685305101,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 20:51:14.527199 2294803 start.go:135] virtualization:  
	I0530 20:51:14.529740 2294803 out.go:177] * [addons-084881] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 20:51:14.532207 2294803 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 20:51:14.532281 2294803 notify.go:220] Checking for updates...
	I0530 20:51:14.533771 2294803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 20:51:14.536143 2294803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 20:51:14.538039 2294803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 20:51:14.539967 2294803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 20:51:14.542026 2294803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 20:51:14.543759 2294803 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 20:51:14.567466 2294803 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 20:51:14.567563 2294803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:51:14.643315 2294803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-30 20:51:14.632583997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:51:14.643419 2294803 docker.go:294] overlay module found
	I0530 20:51:14.645632 2294803 out.go:177] * Using the docker driver based on user configuration
	I0530 20:51:14.647452 2294803 start.go:295] selected driver: docker
	I0530 20:51:14.647483 2294803 start.go:870] validating driver "docker" against <nil>
	I0530 20:51:14.647499 2294803 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 20:51:14.648138 2294803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:51:14.709167 2294803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-30 20:51:14.698969994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:51:14.709290 2294803 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 20:51:14.709562 2294803 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 20:51:14.711572 2294803 out.go:177] * Using Docker driver with root privileges
	I0530 20:51:14.713413 2294803 cni.go:84] Creating CNI manager for ""
	I0530 20:51:14.713432 2294803 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:51:14.713447 2294803 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0530 20:51:14.713458 2294803 start_flags.go:319] config:
	{Name:addons-084881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:51:14.715885 2294803 out.go:177] * Starting control plane node addons-084881 in cluster addons-084881
	I0530 20:51:14.717788 2294803 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0530 20:51:14.719692 2294803 out.go:177] * Pulling base image ...
	I0530 20:51:14.721507 2294803 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:14.721561 2294803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0530 20:51:14.721563 2294803 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	I0530 20:51:14.721675 2294803 cache.go:57] Caching tarball of preloaded images
	I0530 20:51:14.721760 2294803 preload.go:174] Found /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 20:51:14.721772 2294803 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on containerd
	I0530 20:51:14.722120 2294803 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/config.json ...
	I0530 20:51:14.722149 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/config.json: {Name:mka9807e848cdc8a23dfc97f970cd105bb0e97be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:14.738801 2294803 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0530 20:51:14.738908 2294803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0530 20:51:14.738932 2294803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0530 20:51:14.738941 2294803 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0530 20:51:14.738948 2294803 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0530 20:51:14.738953 2294803 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from local cache
	I0530 20:51:30.132927 2294803 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from cached tarball
	I0530 20:51:30.132979 2294803 cache.go:195] Successfully downloaded all kic artifacts
	I0530 20:51:30.133030 2294803 start.go:364] acquiring machines lock for addons-084881: {Name:mk7b1640b8054b7efbe4cbca84ab1b62233c8a44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 20:51:30.133168 2294803 start.go:368] acquired machines lock for "addons-084881" in 116.381µs
	I0530 20:51:30.133199 2294803 start.go:93] Provisioning new machine with config: &{Name:addons-084881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0530 20:51:30.133281 2294803 start.go:125] createHost starting for "" (driver="docker")
	I0530 20:51:30.135684 2294803 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0530 20:51:30.135949 2294803 start.go:159] libmachine.API.Create for "addons-084881" (driver="docker")
	I0530 20:51:30.135976 2294803 client.go:168] LocalClient.Create starting
	I0530 20:51:30.136120 2294803 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem
	I0530 20:51:30.569090 2294803 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem
	I0530 20:51:31.012014 2294803 cli_runner.go:164] Run: docker network inspect addons-084881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0530 20:51:31.029942 2294803 cli_runner.go:211] docker network inspect addons-084881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0530 20:51:31.030024 2294803 network_create.go:281] running [docker network inspect addons-084881] to gather additional debugging logs...
	I0530 20:51:31.030040 2294803 cli_runner.go:164] Run: docker network inspect addons-084881
	W0530 20:51:31.047935 2294803 cli_runner.go:211] docker network inspect addons-084881 returned with exit code 1
	I0530 20:51:31.047965 2294803 network_create.go:284] error running [docker network inspect addons-084881]: docker network inspect addons-084881: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-084881 not found
	I0530 20:51:31.047976 2294803 network_create.go:286] output of [docker network inspect addons-084881]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-084881 not found
	
	** /stderr **
	I0530 20:51:31.048056 2294803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 20:51:31.068496 2294803 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011a06e0}
	I0530 20:51:31.068533 2294803 network_create.go:123] attempt to create docker network addons-084881 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0530 20:51:31.068592 2294803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-084881 addons-084881
	I0530 20:51:31.139525 2294803 network_create.go:107] docker network addons-084881 192.168.49.0/24 created
	I0530 20:51:31.139557 2294803 kic.go:117] calculated static IP "192.168.49.2" for the "addons-084881" container
	I0530 20:51:31.139631 2294803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0530 20:51:31.156135 2294803 cli_runner.go:164] Run: docker volume create addons-084881 --label name.minikube.sigs.k8s.io=addons-084881 --label created_by.minikube.sigs.k8s.io=true
	I0530 20:51:31.175049 2294803 oci.go:103] Successfully created a docker volume addons-084881
	I0530 20:51:31.175136 2294803 cli_runner.go:164] Run: docker run --rm --name addons-084881-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-084881 --entrypoint /usr/bin/test -v addons-084881:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0530 20:51:32.859694 2294803 cli_runner.go:217] Completed: docker run --rm --name addons-084881-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-084881 --entrypoint /usr/bin/test -v addons-084881:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (1.684508136s)
	I0530 20:51:32.859746 2294803 oci.go:107] Successfully prepared a docker volume addons-084881
	I0530 20:51:32.859771 2294803 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:32.859789 2294803 kic.go:190] Starting extracting preloaded images to volume ...
	I0530 20:51:32.859881 2294803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-084881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0530 20:51:36.882582 2294803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-084881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.022643992s)
	I0530 20:51:36.882615 2294803 kic.go:199] duration metric: took 4.022823 seconds to extract preloaded images to volume
	W0530 20:51:36.882752 2294803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0530 20:51:36.882866 2294803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0530 20:51:36.946557 2294803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-084881 --name addons-084881 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-084881 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-084881 --network addons-084881 --ip 192.168.49.2 --volume addons-084881:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0530 20:51:37.295674 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Running}}
	I0530 20:51:37.330180 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:51:37.353514 2294803 cli_runner.go:164] Run: docker exec addons-084881 stat /var/lib/dpkg/alternatives/iptables
	I0530 20:51:37.433202 2294803 oci.go:144] the created container "addons-084881" has a running status.
	I0530 20:51:37.433227 2294803 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa...
	I0530 20:51:37.882047 2294803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0530 20:51:37.910074 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:51:37.941189 2294803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0530 20:51:37.941209 2294803 kic_runner.go:114] Args: [docker exec --privileged addons-084881 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0530 20:51:38.054620 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:51:38.094276 2294803 machine.go:88] provisioning docker machine ...
	I0530 20:51:38.094306 2294803 ubuntu.go:169] provisioning hostname "addons-084881"
	I0530 20:51:38.094396 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:38.132475 2294803 main.go:141] libmachine: Using SSH client type: native
	I0530 20:51:38.132932 2294803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 40946 <nil> <nil>}
	I0530 20:51:38.132944 2294803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-084881 && echo "addons-084881" | sudo tee /etc/hostname
	I0530 20:51:38.398342 2294803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-084881
	
	I0530 20:51:38.398486 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:38.429195 2294803 main.go:141] libmachine: Using SSH client type: native
	I0530 20:51:38.429719 2294803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 40946 <nil> <nil>}
	I0530 20:51:38.429739 2294803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-084881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-084881/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-084881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0530 20:51:38.582573 2294803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0530 20:51:38.582635 2294803 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16597-2288886/.minikube CaCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16597-2288886/.minikube}
	I0530 20:51:38.582675 2294803 ubuntu.go:177] setting up certificates
	I0530 20:51:38.582695 2294803 provision.go:83] configureAuth start
	I0530 20:51:38.582770 2294803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-084881
	I0530 20:51:38.604382 2294803 provision.go:138] copyHostCerts
	I0530 20:51:38.604449 2294803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem (1078 bytes)
	I0530 20:51:38.604559 2294803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem (1123 bytes)
	I0530 20:51:38.604614 2294803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem (1679 bytes)
	I0530 20:51:38.604660 2294803 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem org=jenkins.addons-084881 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-084881]
	I0530 20:51:38.852279 2294803 provision.go:172] copyRemoteCerts
	I0530 20:51:38.852348 2294803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0530 20:51:38.852395 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:38.870458 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:38.964256 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0530 20:51:38.994757 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0530 20:51:39.026559 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0530 20:51:39.057592 2294803 provision.go:86] duration metric: configureAuth took 474.870917ms
	I0530 20:51:39.057622 2294803 ubuntu.go:193] setting minikube options for container-runtime
	I0530 20:51:39.057822 2294803 config.go:182] Loaded profile config "addons-084881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 20:51:39.057844 2294803 machine.go:91] provisioned docker machine in 963.55083ms
	I0530 20:51:39.057850 2294803 client.go:171] LocalClient.Create took 8.921869104s
	I0530 20:51:39.057869 2294803 start.go:167] duration metric: libmachine.API.Create for "addons-084881" took 8.921921617s
	I0530 20:51:39.057879 2294803 start.go:300] post-start starting for "addons-084881" (driver="docker")
	I0530 20:51:39.057885 2294803 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0530 20:51:39.057954 2294803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0530 20:51:39.058000 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.076041 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.172300 2294803 ssh_runner.go:195] Run: cat /etc/os-release
	I0530 20:51:39.176421 2294803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0530 20:51:39.176457 2294803 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0530 20:51:39.176469 2294803 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0530 20:51:39.176475 2294803 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0530 20:51:39.176484 2294803 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/addons for local assets ...
	I0530 20:51:39.176560 2294803 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/files for local assets ...
	I0530 20:51:39.176585 2294803 start.go:303] post-start completed in 118.700692ms
	I0530 20:51:39.176899 2294803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-084881
	I0530 20:51:39.195732 2294803 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/config.json ...
	I0530 20:51:39.196038 2294803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0530 20:51:39.196083 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.219793 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.315542 2294803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0530 20:51:39.321639 2294803 start.go:128] duration metric: createHost completed in 9.188343878s
	I0530 20:51:39.321663 2294803 start.go:83] releasing machines lock for "addons-084881", held for 9.188483496s
	I0530 20:51:39.321734 2294803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-084881
	I0530 20:51:39.339454 2294803 ssh_runner.go:195] Run: cat /version.json
	I0530 20:51:39.339472 2294803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0530 20:51:39.339510 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.339535 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.362699 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.364485 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.592183 2294803 ssh_runner.go:195] Run: systemctl --version
	I0530 20:51:39.598240 2294803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0530 20:51:39.604368 2294803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0530 20:51:39.636275 2294803 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0530 20:51:39.636361 2294803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0530 20:51:39.670961 2294803 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
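
The two find/-exec runs above are minikube's CNI cleanup for the containerd runtime: the first patches any loopback config under /etc/cni/net.d so it carries a "name" field and cniVersion 1.0.0, and the second renames competing bridge/podman configs out of the way with a .mk_disabled suffix, which is what the "disabled [...] bridge cni config(s)" line reports. A minimal shell sketch of the second step (the glob patterns mirror the find predicates; any matched file names beyond the two listed above are hypothetical):

    # Park competing CNI configs so only the intended CNI config is consulted.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -f "$f" ] || continue                  # glob matched nothing
      case "$f" in *.mk_disabled) continue ;; esac
      sudo mv "$f" "$f.mk_disabled"            # runtimes ignore the suffix
    done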
	I0530 20:51:39.670985 2294803 start.go:481] detecting cgroup driver to use...
	I0530 20:51:39.671039 2294803 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0530 20:51:39.671114 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0530 20:51:39.686257 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0530 20:51:39.700277 2294803 docker.go:193] disabling cri-docker service (if available) ...
	I0530 20:51:39.700367 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0530 20:51:39.716693 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0530 20:51:39.734038 2294803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0530 20:51:39.834037 2294803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0530 20:51:39.930409 2294803 docker.go:209] disabling docker service ...
	I0530 20:51:39.930498 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0530 20:51:39.953749 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0530 20:51:39.967855 2294803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0530 20:51:40.075440 2294803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0530 20:51:40.178794 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0530 20:51:40.193378 2294803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0530 20:51:40.214923 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0530 20:51:40.228059 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0530 20:51:40.241473 2294803 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0530 20:51:40.241592 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0530 20:51:40.255288 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 20:51:40.268487 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0530 20:51:40.282029 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 20:51:40.295866 2294803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0530 20:51:40.308986 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
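
Taken together, the sed edits above rewrite /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the detected cgroupfs driver, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to runc.v2, and the CNI conf_dir is fixed to /etc/cni/net.d. A quick spot check of the result, together with the /etc/crictl.yaml written just above (key names follow containerd's stock config; a sketch, not part of the test run):

    # Verify the values the sed edits are meant to leave behind.
    grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # Expected (modulo indentation):
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    # And crictl should now reach containerd via the socket in /etc/crictl.yaml:
    sudo crictl info >/dev/null && echo "crictl reaches containerd"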
	I0530 20:51:40.322648 2294803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0530 20:51:40.333336 2294803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0530 20:51:40.343989 2294803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0530 20:51:40.455303 2294803 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0530 20:51:40.538924 2294803 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0530 20:51:40.539011 2294803 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0530 20:51:40.544870 2294803 start.go:549] Will wait 60s for crictl version
	I0530 20:51:40.544982 2294803 ssh_runner.go:195] Run: which crictl
	I0530 20:51:40.550161 2294803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0530 20:51:40.615902 2294803 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0530 20:51:40.616058 2294803 ssh_runner.go:195] Run: containerd --version
	I0530 20:51:40.649498 2294803 ssh_runner.go:195] Run: containerd --version
	I0530 20:51:40.681971 2294803 out.go:177] * Preparing Kubernetes v1.27.2 on containerd 1.6.21 ...
	I0530 20:51:40.683833 2294803 cli_runner.go:164] Run: docker network inspect addons-084881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 20:51:40.702684 2294803 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0530 20:51:40.707396 2294803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0530 20:51:40.722856 2294803 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:40.722927 2294803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 20:51:40.765914 2294803 containerd.go:604] all images are preloaded for containerd runtime.
	I0530 20:51:40.765947 2294803 containerd.go:518] Images already preloaded, skipping extraction
	I0530 20:51:40.766004 2294803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 20:51:40.809793 2294803 containerd.go:604] all images are preloaded for containerd runtime.
	I0530 20:51:40.809816 2294803 cache_images.go:84] Images are preloaded, skipping loading
	I0530 20:51:40.809873 2294803 ssh_runner.go:195] Run: sudo crictl info
	I0530 20:51:40.855825 2294803 cni.go:84] Creating CNI manager for ""
	I0530 20:51:40.855903 2294803 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:51:40.855919 2294803 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0530 20:51:40.855939 2294803 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-084881 NodeName:addons-084881 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0530 20:51:40.856097 2294803 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-084881"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
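
This generated config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down). To sanity-check a file like this outside of minikube, one option is to diff it against kubeadm's own defaults (a sketch, assuming the v1.27 kubeadm binary on the PATH):

    # Render kubeadm's defaults and compare them with the generated config.
    kubeadm config print init-defaults > /tmp/kubeadm-defaults.yaml
    diff -u /tmp/kubeadm-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new | head -40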
	
	I0530 20:51:40.856188 2294803 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-084881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
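
The drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 385-byte scp below). The empty ExecStart= line is deliberate: systemd requires clearing an existing ExecStart before a drop-in may replace it. To inspect the merged unit on the node (standard systemd commands, nothing minikube-specific):

    # Show the kubelet unit with every drop-in merged in.
    systemctl cat kubelet.service
    # After a daemon-reload, confirm which ExecStart won:
    sudo systemctl daemon-reload && systemctl show kubelet -p ExecStart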
	I0530 20:51:40.856258 2294803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0530 20:51:40.867908 2294803 binaries.go:44] Found k8s binaries, skipping transfer
	I0530 20:51:40.868025 2294803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0530 20:51:40.879092 2294803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0530 20:51:40.900677 2294803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0530 20:51:40.923015 2294803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0530 20:51:40.944629 2294803 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0530 20:51:40.949055 2294803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
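
The brace-group idiom above makes the /etc/hosts update idempotent: strip any previous line ending in the host name, append the fresh entry, then copy the result back with sudo. Generalized as a small helper (a sketch; the function name is invented, the values come from the log):

    # Idempotently pin NAME to IP in /etc/hosts.
    add_host() {
      ip="$1"; name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    add_host 192.168.49.2 control-plane.minikube.internal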
	I0530 20:51:40.962704 2294803 certs.go:56] Setting up /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881 for IP: 192.168.49.2
	I0530 20:51:40.962788 2294803 certs.go:190] acquiring lock for shared ca certs: {Name:mkef74d64a59002b998e67685a207d5c04604358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:40.963533 2294803 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key
	I0530 20:51:41.200345 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt ...
	I0530 20:51:41.200377 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt: {Name:mk157ef564acd0c19f3f749a6956fe0ffd6ca34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.200582 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key ...
	I0530 20:51:41.200597 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key: {Name:mkbb37766b2be36c86ce6ccb5fce2bb07e873688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.200691 2294803 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key
	I0530 20:51:41.616153 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt ...
	I0530 20:51:41.616187 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt: {Name:mkda06c826c65b89669d00ec1e9bb63cb71f4c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.616920 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key ...
	I0530 20:51:41.616939 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key: {Name:mk34835b7eccc447446ebc8782b3cd9db035a479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.617695 2294803 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.key
	I0530 20:51:41.617716 2294803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt with IP's: []
	I0530 20:51:42.044722 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt ...
	I0530 20:51:42.044753 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: {Name:mkfff1955f843b2ab93c9d364ec4cdb49f2a3b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.044947 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.key ...
	I0530 20:51:42.044958 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.key: {Name:mkd750a607e2f4270367ede56824fe275bc8738c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.045042 2294803 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2
	I0530 20:51:42.045063 2294803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0530 20:51:42.886407 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2 ...
	I0530 20:51:42.886441 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2: {Name:mkc449d7cc41ec5cfadb7316a2c049c2c4d495ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.886633 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2 ...
	I0530 20:51:42.886645 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2: {Name:mk86003e889133b0a5a37163e5c9bca6d558b6c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.886724 2294803 certs.go:337] copying /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt
	I0530 20:51:42.886808 2294803 certs.go:341] copying /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key
	I0530 20:51:42.886859 2294803 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key
	I0530 20:51:42.886880 2294803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt with IP's: []
	I0530 20:51:43.271559 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt ...
	I0530 20:51:43.271595 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt: {Name:mkfaac240443bbd55848e09755e20bac46b2e016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:43.272223 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key ...
	I0530 20:51:43.272241 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key: {Name:mk83834dd94f7978004956ac0737c2e5de724936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:43.272505 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem (1675 bytes)
	I0530 20:51:43.272553 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem (1078 bytes)
	I0530 20:51:43.272586 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem (1123 bytes)
	I0530 20:51:43.272614 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem (1679 bytes)
	I0530 20:51:43.273383 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0530 20:51:43.303288 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0530 20:51:43.334985 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0530 20:51:43.363387 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0530 20:51:43.391752 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0530 20:51:43.420100 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0530 20:51:43.450145 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0530 20:51:43.479554 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0530 20:51:43.508327 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0530 20:51:43.539983 2294803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0530 20:51:43.562906 2294803 ssh_runner.go:195] Run: openssl version
	I0530 20:51:43.570429 2294803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0530 20:51:43.582784 2294803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0530 20:51:43.587695 2294803 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 30 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0530 20:51:43.587777 2294803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0530 20:51:43.596891 2294803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
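
The symlink name b5213941.0 is not arbitrary: OpenSSL locates CA certificates in /etc/ssl/certs by the certificate's subject-name hash, which is exactly what the "openssl x509 -hash -noout" run above prints. Recomputing it by hand:

    # The symlink is the CA's subject hash plus a ".0" suffix.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                    # b5213941 for the minikubeCA.pem above
    ls -l "/etc/ssl/certs/$h.0"  # -> points back at minikubeCA.pem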
	I0530 20:51:43.609599 2294803 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0530 20:51:43.614966 2294803 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0530 20:51:43.615043 2294803 kubeadm.go:404] StartCluster: {Name:addons-084881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:51:43.615158 2294803 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0530 20:51:43.615264 2294803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0530 20:51:43.659559 2294803 cri.go:88] found id: ""
	I0530 20:51:43.659665 2294803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0530 20:51:43.671088 2294803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0530 20:51:43.682573 2294803 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0530 20:51:43.682651 2294803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0530 20:51:43.694366 2294803 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0530 20:51:43.694418 2294803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0530 20:51:43.802980 2294803 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
	I0530 20:51:43.881743 2294803 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0530 20:51:43.882012 2294803 kubeadm.go:322] W0530 20:51:43.881204     894 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0530 20:51:51.233796 2294803 kubeadm.go:322] W0530 20:51:51.232665     894 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0530 20:52:00.728098 2294803 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0530 20:52:00.728152 2294803 kubeadm.go:322] [preflight] Running pre-flight checks
	I0530 20:52:00.728235 2294803 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0530 20:52:00.728287 2294803 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-aws
	I0530 20:52:00.728319 2294803 kubeadm.go:322] OS: Linux
	I0530 20:52:00.728364 2294803 kubeadm.go:322] CGROUPS_CPU: enabled
	I0530 20:52:00.728409 2294803 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0530 20:52:00.728454 2294803 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0530 20:52:00.728499 2294803 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0530 20:52:00.728545 2294803 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0530 20:52:00.728593 2294803 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0530 20:52:00.728636 2294803 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0530 20:52:00.728682 2294803 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0530 20:52:00.728726 2294803 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0530 20:52:00.728794 2294803 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0530 20:52:00.728883 2294803 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0530 20:52:00.728969 2294803 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0530 20:52:00.729029 2294803 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0530 20:52:00.731215 2294803 out.go:204]   - Generating certificates and keys ...
	I0530 20:52:00.731403 2294803 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0530 20:52:00.731513 2294803 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0530 20:52:00.731586 2294803 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0530 20:52:00.731666 2294803 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0530 20:52:00.731727 2294803 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0530 20:52:00.731778 2294803 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0530 20:52:00.731831 2294803 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0530 20:52:00.731948 2294803 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-084881 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0530 20:52:00.732001 2294803 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0530 20:52:00.732117 2294803 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-084881 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0530 20:52:00.732182 2294803 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0530 20:52:00.732247 2294803 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0530 20:52:00.732294 2294803 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0530 20:52:00.732351 2294803 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0530 20:52:00.732404 2294803 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0530 20:52:00.732458 2294803 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0530 20:52:00.732522 2294803 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0530 20:52:00.732577 2294803 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0530 20:52:00.732681 2294803 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0530 20:52:00.732766 2294803 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0530 20:52:00.732806 2294803 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0530 20:52:00.732873 2294803 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0530 20:52:00.735381 2294803 out.go:204]   - Booting up control plane ...
	I0530 20:52:00.735484 2294803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0530 20:52:00.735557 2294803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0530 20:52:00.735620 2294803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0530 20:52:00.735696 2294803 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0530 20:52:00.735841 2294803 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0530 20:52:00.735912 2294803 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002052 seconds
	I0530 20:52:00.736012 2294803 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0530 20:52:00.736128 2294803 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0530 20:52:00.736183 2294803 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0530 20:52:00.736352 2294803 kubeadm.go:322] [mark-control-plane] Marking the node addons-084881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0530 20:52:00.736404 2294803 kubeadm.go:322] [bootstrap-token] Using token: m1twdw.2d8g3p9wgkp7ep6a
	I0530 20:52:00.738507 2294803 out.go:204]   - Configuring RBAC rules ...
	I0530 20:52:00.738735 2294803 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0530 20:52:00.738863 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0530 20:52:00.739064 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0530 20:52:00.739232 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0530 20:52:00.739383 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0530 20:52:00.739540 2294803 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0530 20:52:00.739702 2294803 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0530 20:52:00.739776 2294803 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0530 20:52:00.739857 2294803 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0530 20:52:00.739894 2294803 kubeadm.go:322] 
	I0530 20:52:00.739983 2294803 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0530 20:52:00.740005 2294803 kubeadm.go:322] 
	I0530 20:52:00.740134 2294803 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0530 20:52:00.740171 2294803 kubeadm.go:322] 
	I0530 20:52:00.740212 2294803 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0530 20:52:00.740313 2294803 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0530 20:52:00.740403 2294803 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0530 20:52:00.740428 2294803 kubeadm.go:322] 
	I0530 20:52:00.740521 2294803 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0530 20:52:00.740529 2294803 kubeadm.go:322] 
	I0530 20:52:00.740591 2294803 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0530 20:52:00.740596 2294803 kubeadm.go:322] 
	I0530 20:52:00.740649 2294803 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0530 20:52:00.740725 2294803 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0530 20:52:00.740794 2294803 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0530 20:52:00.740798 2294803 kubeadm.go:322] 
	I0530 20:52:00.740883 2294803 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0530 20:52:00.740960 2294803 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0530 20:52:00.740965 2294803 kubeadm.go:322] 
	I0530 20:52:00.741052 2294803 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m1twdw.2d8g3p9wgkp7ep6a \
	I0530 20:52:00.741156 2294803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff077636a4006c51f7456795481b97b5286c2b636cefd4a65a893c56dd417d66 \
	I0530 20:52:00.741177 2294803 kubeadm.go:322] 	--control-plane 
	I0530 20:52:00.741181 2294803 kubeadm.go:322] 
	I0530 20:52:00.741266 2294803 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0530 20:52:00.741270 2294803 kubeadm.go:322] 
	I0530 20:52:00.741392 2294803 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m1twdw.2d8g3p9wgkp7ep6a \
	I0530 20:52:00.741515 2294803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff077636a4006c51f7456795481b97b5286c2b636cefd4a65a893c56dd417d66 
	I0530 20:52:00.741524 2294803 cni.go:84] Creating CNI manager for ""
	I0530 20:52:00.741531 2294803 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:52:00.743765 2294803 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0530 20:52:00.745475 2294803 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0530 20:52:00.752632 2294803 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0530 20:52:00.752649 2294803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0530 20:52:00.803697 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0530 20:52:01.745358 2294803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0530 20:52:01.745494 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:01.745571 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=6d0d5d534b34391ed9438fcde26494d33a798fae minikube.k8s.io/name=addons-084881 minikube.k8s.io/updated_at=2023_05_30T20_52_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:01.895549 2294803 ops.go:34] apiserver oom_adj: -16
	I0530 20:52:01.895642 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:02.551470 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:03.051013 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:03.551640 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:04.051575 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:04.551213 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:05.050967 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:05.551501 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:06.050989 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:06.551904 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:07.051083 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:07.551854 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:08.050936 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:08.551055 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:09.051944 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:09.551486 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:10.051711 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:10.551802 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:11.051165 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:11.551059 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:12.051087 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:12.551834 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:13.050958 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:13.298792 2294803 kubeadm.go:1076] duration metric: took 11.553347185s to wait for elevateKubeSystemPrivileges.
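
The run of identical "kubectl get sa default" lines above is a poll: the default ServiceAccount is created asynchronously by the controller manager, and minikube retries roughly twice a second until it exists before granting elevated privileges to kube-system (the minikube-rbac clusterrolebinding created earlier). The same wait as a plain shell loop (a sketch, not minikube's Go code):

    # Poll until the "default" ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done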
	I0530 20:52:13.298821 2294803 kubeadm.go:406] StartCluster complete in 29.683781643s
	I0530 20:52:13.298839 2294803 settings.go:142] acquiring lock: {Name:mkdbeb66ef6240a2ca39c4b606ba49055796e4d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:52:13.298959 2294803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 20:52:13.299332 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/kubeconfig: {Name:mk0fdfd8357f1362eedcc9930d50aa3f3a348d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:52:13.301965 2294803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0530 20:52:13.301979 2294803 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0530 20:52:13.302060 2294803 addons.go:66] Setting volumesnapshots=true in profile "addons-084881"
	I0530 20:52:13.302074 2294803 addons.go:228] Setting addon volumesnapshots=true in "addons-084881"
	I0530 20:52:13.302113 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.302124 2294803 addons.go:66] Setting gcp-auth=true in profile "addons-084881"
	I0530 20:52:13.302148 2294803 mustload.go:65] Loading cluster: addons-084881
	I0530 20:52:13.302371 2294803 config.go:182] Loaded profile config "addons-084881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 20:52:13.302583 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.302631 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.302849 2294803 config.go:182] Loaded profile config "addons-084881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 20:52:13.302880 2294803 addons.go:66] Setting cloud-spanner=true in profile "addons-084881"
	I0530 20:52:13.302889 2294803 addons.go:228] Setting addon cloud-spanner=true in "addons-084881"
	I0530 20:52:13.302920 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.303335 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.303430 2294803 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-084881"
	I0530 20:52:13.303456 2294803 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-084881"
	I0530 20:52:13.303485 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.303924 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.303996 2294803 addons.go:66] Setting default-storageclass=true in profile "addons-084881"
	I0530 20:52:13.304014 2294803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-084881"
	I0530 20:52:13.304264 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.304842 2294803 addons.go:66] Setting inspektor-gadget=true in profile "addons-084881"
	I0530 20:52:13.304860 2294803 addons.go:228] Setting addon inspektor-gadget=true in "addons-084881"
	I0530 20:52:13.304899 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.305449 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.305604 2294803 addons.go:66] Setting ingress=true in profile "addons-084881"
	I0530 20:52:13.305646 2294803 addons.go:228] Setting addon ingress=true in "addons-084881"
	I0530 20:52:13.305736 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.306281 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.306459 2294803 addons.go:66] Setting ingress-dns=true in profile "addons-084881"
	I0530 20:52:13.306516 2294803 addons.go:228] Setting addon ingress-dns=true in "addons-084881"
	I0530 20:52:13.306602 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.307159 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.307422 2294803 addons.go:66] Setting registry=true in profile "addons-084881"
	I0530 20:52:13.307468 2294803 addons.go:228] Setting addon registry=true in "addons-084881"
	I0530 20:52:13.307529 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.308116 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.308265 2294803 addons.go:66] Setting metrics-server=true in profile "addons-084881"
	I0530 20:52:13.308314 2294803 addons.go:228] Setting addon metrics-server=true in "addons-084881"
	I0530 20:52:13.308386 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.308619 2294803 addons.go:66] Setting storage-provisioner=true in profile "addons-084881"
	I0530 20:52:13.308641 2294803 addons.go:228] Setting addon storage-provisioner=true in "addons-084881"
	I0530 20:52:13.308675 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.325888 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.350734 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.480651 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.504247 2294803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0530 20:52:13.533814 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0530 20:52:13.533909 2294803 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0530 20:52:13.541030 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0530 20:52:13.541047 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0530 20:52:13.541048 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0530 20:52:13.541117 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.541125 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.550974 2294803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 20:52:13.556748 2294803 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0530 20:52:13.556774 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0530 20:52:13.556842 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.578962 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0530 20:52:13.581177 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0530 20:52:13.584959 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0530 20:52:13.586659 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0530 20:52:13.593295 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0530 20:52:13.595113 2294803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0530 20:52:13.596806 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0530 20:52:13.596832 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0530 20:52:13.596907 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.601428 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0530 20:52:13.603514 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0530 20:52:13.609732 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0530 20:52:13.611564 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0530 20:52:13.611594 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0530 20:52:13.611688 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.621884 2294803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0530 20:52:13.624307 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0530 20:52:13.624334 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0530 20:52:13.624417 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.679339 2294803 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0530 20:52:13.681556 2294803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0530 20:52:13.687142 2294803 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0530 20:52:13.687171 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0530 20:52:13.687243 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.697535 2294803 out.go:177]   - Using image docker.io/registry:2.8.1
	I0530 20:52:13.699773 2294803 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0530 20:52:13.699803 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0530 20:52:13.699887 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.701021 2294803 addons.go:228] Setting addon default-storageclass=true in "addons-084881"
	I0530 20:52:13.701063 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.701617 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.706273 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0530 20:52:13.711964 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
	I0530 20:52:13.713894 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0530 20:52:13.716112 2294803 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0530 20:52:13.716144 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16145 bytes)
	I0530 20:52:13.716231 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.773489 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.812268 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.853412 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.893411 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.894797 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.900678 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.911893 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.937409 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.957420 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.962557 2294803 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0530 20:52:13.962577 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0530 20:52:13.962641 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.991885 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
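
Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call above resolves the host port that Docker published for the node container's 22/tcp; the sshutil lines that follow then dial that port on 127.0.0.1 (40946 in this run). The same lookup can be reproduced by hand; a minimal sketch, assuming the addons-084881 container is still running:

	# Index into the container's port map and print the host port bound to 22/tcp.
	# For this run it prints 40946, matching the ssh clients opened above.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-084881
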
	I0530 20:52:14.152433 2294803 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-084881" context rescaled to 1 replicas
	I0530 20:52:14.152475 2294803 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0530 20:52:14.155465 2294803 out.go:177] * Verifying Kubernetes components...
	I0530 20:52:14.158105 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 20:52:14.216696 2294803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0530 20:52:14.472188 2294803 node_ready.go:35] waiting up to 6m0s for node "addons-084881" to be "Ready" ...
	I0530 20:52:14.476262 2294803 node_ready.go:49] node "addons-084881" has status "Ready":"True"
	I0530 20:52:14.476290 2294803 node_ready.go:38] duration metric: took 4.072056ms waiting for node "addons-084881" to be "Ready" ...
	I0530 20:52:14.476301 2294803 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0530 20:52:14.490748 2294803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:14.603994 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0530 20:52:14.607091 2294803 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0530 20:52:14.607113 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0530 20:52:14.687620 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0530 20:52:14.687684 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0530 20:52:14.699006 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0530 20:52:14.733985 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0530 20:52:14.739726 2294803 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0530 20:52:14.739800 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0530 20:52:14.746958 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0530 20:52:14.751108 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0530 20:52:14.751199 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0530 20:52:14.981349 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0530 20:52:14.986488 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0530 20:52:14.986513 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0530 20:52:14.999104 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0530 20:52:14.999134 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0530 20:52:15.055916 2294803 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0530 20:52:15.055945 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0530 20:52:15.065021 2294803 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0530 20:52:15.065047 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0530 20:52:15.086353 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0530 20:52:15.086380 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0530 20:52:15.253477 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0530 20:52:15.253501 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0530 20:52:15.258264 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0530 20:52:15.258289 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0530 20:52:15.321921 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0530 20:52:15.321947 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0530 20:52:15.325536 2294803 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0530 20:52:15.325560 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0530 20:52:15.340794 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0530 20:52:15.402473 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0530 20:52:15.402498 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0530 20:52:15.440782 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0530 20:52:15.440805 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0530 20:52:15.592976 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0530 20:52:15.593001 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0530 20:52:15.607532 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0530 20:52:15.619506 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0530 20:52:15.619539 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0530 20:52:15.622530 2294803 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0530 20:52:15.622554 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0530 20:52:15.743458 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0530 20:52:15.743565 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0530 20:52:15.807427 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0530 20:52:15.812093 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0530 20:52:15.812118 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0530 20:52:16.063888 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0530 20:52:16.063916 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0530 20:52:16.088264 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0530 20:52:16.088289 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0530 20:52:16.242528 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0530 20:52:16.242553 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0530 20:52:16.330106 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0530 20:52:16.330190 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0530 20:52:16.400648 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0530 20:52:16.528965 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:16.573169 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0530 20:52:16.573242 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0530 20:52:16.714853 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0530 20:52:16.714880 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0530 20:52:16.741349 2294803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.524579689s)
	I0530 20:52:16.741427 2294803 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
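
The CoreDNS rewrite launched at 20:52:14.216696 (and completed just above, 2.52s later) edits the coredns ConfigMap in place: sed splices a `hosts` block in front of the `forward . /etc/resolv.conf` directive and a `log` directive in front of `errors`, then pipes the result back through `kubectl replace -f -`. Reconstructed from the sed expressions, the fragment injected into the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

which is what makes host.minikube.internal resolve to the host-side address 192.168.49.1 from inside the cluster.
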
	I0530 20:52:16.891807 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0530 20:52:16.891877 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0530 20:52:17.113926 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0530 20:52:17.789091 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.185063182s)
	I0530 20:52:17.789151 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.090081613s)
	I0530 20:52:17.789177 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.05512654s)
	I0530 20:52:19.056120 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:20.311620 2294803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0530 20:52:20.311744 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:20.342912 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:20.644972 2294803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0530 20:52:20.686332 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.939281446s)
	I0530 20:52:20.686413 2294803 addons.go:464] Verifying addon ingress=true in "addons-084881"
	I0530 20:52:20.688702 2294803 out.go:177] * Verifying ingress addon...
	I0530 20:52:20.686775 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.705271159s)
	I0530 20:52:20.686819 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.345997789s)
	I0530 20:52:20.686892 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.079332528s)
	I0530 20:52:20.686988 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.87953021s)
	I0530 20:52:20.687055 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.286316477s)
	I0530 20:52:20.688892 2294803 addons.go:464] Verifying addon registry=true in "addons-084881"
	I0530 20:52:20.688901 2294803 addons.go:464] Verifying addon metrics-server=true in "addons-084881"
	W0530 20:52:20.689029 2294803 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0530 20:52:20.692471 2294803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0530 20:52:20.694127 2294803 out.go:177] * Verifying registry addon...
	I0530 20:52:20.696965 2294803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0530 20:52:20.694213 2294803 retry.go:31] will retry after 322.526505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
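
The two identical error dumps above record a CRD ordering race rather than a broken manifest: a single `kubectl apply` creates the VolumeSnapshot CRDs and, in the same pass, a VolumeSnapshotClass that depends on them, so the REST mapping for snapshot.storage.k8s.io/v1 is not discoverable yet and the CR is rejected with "ensure CRDs are installed first". minikube handles this by retrying (after 322ms here, switching to `apply --force`); by then the CRDs are established and the second attempt succeeds below. A two-phase apply would sidestep the race entirely; a hypothetical sketch, not what minikube actually runs:

	# Phase 1: create the CRDs and block until the API server reports them Established.
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# Phase 2: only now create instances of the new kind.
	kubectl apply -f csi-hostpath-snapshotclass.yaml
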
	I0530 20:52:20.699113 2294803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0530 20:52:20.699133 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:20.704447 2294803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0530 20:52:20.704468 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:20.758037 2294803 addons.go:228] Setting addon gcp-auth=true in "addons-084881"
	I0530 20:52:20.758136 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:20.758682 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:20.788210 2294803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0530 20:52:20.788261 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:20.818410 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:21.019915 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0530 20:52:21.204270 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:21.210275 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:21.554030 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:21.706308 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:21.717424 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:22.203848 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:22.211648 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:22.774301 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:22.774978 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:22.907344 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.793367003s)
	I0530 20:52:22.907390 2294803 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-084881"
	I0530 20:52:22.910180 2294803 out.go:177] * Verifying csi-hostpath-driver addon...
	I0530 20:52:22.907763 2294803 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.119440628s)
	I0530 20:52:22.914978 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0530 20:52:22.913624 2294803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0530 20:52:22.919337 2294803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0530 20:52:22.921364 2294803 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0530 20:52:22.921388 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0530 20:52:22.935202 2294803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0530 20:52:22.935235 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:23.000476 2294803 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0530 20:52:23.000515 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0530 20:52:23.189501 2294803 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0530 20:52:23.189528 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0530 20:52:23.204055 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:23.209762 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:23.219635 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.199659782s)
	I0530 20:52:23.236919 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0530 20:52:23.441809 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:23.704485 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:23.709800 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:23.942934 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:24.051781 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:24.108969 2294803 addons.go:464] Verifying addon gcp-auth=true in "addons-084881"
	I0530 20:52:24.111686 2294803 out.go:177] * Verifying gcp-auth addon...
	I0530 20:52:24.120401 2294803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0530 20:52:24.130460 2294803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0530 20:52:24.130522 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:24.204392 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:24.210525 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:24.442756 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:24.634493 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:24.704936 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:24.710548 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:24.942849 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:25.136636 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:25.204244 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:25.209979 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:25.441823 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:25.634654 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:25.705098 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:25.710350 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:25.941877 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:26.135420 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:26.207597 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:26.213516 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:26.442226 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:26.517208 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:26.635963 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:26.705367 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:26.711133 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:26.941743 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:27.134749 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:27.216209 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:27.225860 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:27.442129 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:27.515860 2294803 pod_ready.go:92] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.515886 2294803 pod_ready.go:81] duration metric: took 13.025106059s waiting for pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.515897 2294803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.518444 2294803 pod_ready.go:97] error getting pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-mrcpc" not found
	I0530 20:52:27.518472 2294803 pod_ready.go:81] duration metric: took 2.568692ms waiting for pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace to be "Ready" ...
	E0530 20:52:27.518483 2294803 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-mrcpc" not found
	I0530 20:52:27.518493 2294803 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.524999 2294803 pod_ready.go:92] pod "etcd-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.525024 2294803 pod_ready.go:81] duration metric: took 6.523751ms waiting for pod "etcd-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.525040 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.532456 2294803 pod_ready.go:92] pod "kube-apiserver-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.532479 2294803 pod_ready.go:81] duration metric: took 7.431773ms waiting for pod "kube-apiserver-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.532489 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.540934 2294803 pod_ready.go:92] pod "kube-controller-manager-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.540960 2294803 pod_ready.go:81] duration metric: took 8.460555ms waiting for pod "kube-controller-manager-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.540973 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-427l8" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.634226 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:27.703899 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:27.709811 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:27.712834 2294803 pod_ready.go:92] pod "kube-proxy-427l8" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.712856 2294803 pod_ready.go:81] duration metric: took 171.876453ms waiting for pod "kube-proxy-427l8" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.712867 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.943829 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:28.113980 2294803 pod_ready.go:92] pod "kube-scheduler-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:28.114002 2294803 pod_ready.go:81] duration metric: took 401.128341ms waiting for pod "kube-scheduler-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:28.114012 2294803 pod_ready.go:38] duration metric: took 13.637701546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0530 20:52:28.114048 2294803 api_server.go:52] waiting for apiserver process to appear ...
	I0530 20:52:28.114119 2294803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
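
The apiserver process gate leans on pgrep's matching flags; annotated, the probe just run is:

	# -f : match the pattern against the full command line, not just the process name
	# -x : require the whole command line to match the pattern exactly
	# -n : select only the newest matching process
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
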
	I0530 20:52:28.135654 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:28.138141 2294803 api_server.go:72] duration metric: took 13.985613931s to wait for apiserver process to appear ...
	I0530 20:52:28.138166 2294803 api_server.go:88] waiting for apiserver healthz status ...
	I0530 20:52:28.138186 2294803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0530 20:52:28.147439 2294803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
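
The healthz gate is a plain HTTPS GET against the apiserver, and the 200/"ok" above means the control plane answered healthy roughly 14s into the wait. The equivalent manual probe (a sketch; -k skips verification of the cluster-CA-signed certificate, and /healthz is readable anonymously by default):

	curl -k https://192.168.49.2:8443/healthz
	# -> ok
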
	I0530 20:52:28.149078 2294803 api_server.go:141] control plane version: v1.27.2
	I0530 20:52:28.149138 2294803 api_server.go:131] duration metric: took 10.964755ms to wait for apiserver health ...
	I0530 20:52:28.149161 2294803 system_pods.go:43] waiting for kube-system pods to appear ...
	I0530 20:52:28.204853 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:28.209526 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:28.321292 2294803 system_pods.go:59] 17 kube-system pods found
	I0530 20:52:28.321496 2294803 system_pods.go:61] "coredns-5d78c9869d-ksg2p" [f9a8cda2-9fe0-4a90-b6f2-942e2dcd3627] Running
	I0530 20:52:28.321506 2294803 system_pods.go:61] "csi-hostpath-attacher-0" [30f584e7-8b32-45c8-acd4-d376356a7976] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0530 20:52:28.321517 2294803 system_pods.go:61] "csi-hostpath-resizer-0" [6f3ba8c9-8d8d-4d9f-b9c4-cde2c7580e06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0530 20:52:28.321527 2294803 system_pods.go:61] "csi-hostpathplugin-rlhv5" [f31d867d-34d9-4778-9c25-f97506b185c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0530 20:52:28.321539 2294803 system_pods.go:61] "etcd-addons-084881" [b5859865-ca58-40b0-b6bf-bcb22c22680b] Running
	I0530 20:52:28.321545 2294803 system_pods.go:61] "kindnet-rfjr4" [d446764b-ed16-41f6-b467-9265e9e62df5] Running
	I0530 20:52:28.321553 2294803 system_pods.go:61] "kube-apiserver-addons-084881" [ab71c754-cafc-494d-ae44-6a3fecd7f1dc] Running
	I0530 20:52:28.321559 2294803 system_pods.go:61] "kube-controller-manager-addons-084881" [30678b6d-fc50-4927-9eec-4dff4fcd73c6] Running
	I0530 20:52:28.321565 2294803 system_pods.go:61] "kube-ingress-dns-minikube" [2cc16a5d-5d03-4b00-bbf5-84737deefcd5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0530 20:52:28.321575 2294803 system_pods.go:61] "kube-proxy-427l8" [c25d4286-4618-4eca-bc9d-da963349be52] Running
	I0530 20:52:28.321580 2294803 system_pods.go:61] "kube-scheduler-addons-084881" [65663488-1806-4ad2-81a2-bedccf0bde50] Running
	I0530 20:52:28.321587 2294803 system_pods.go:61] "metrics-server-844d8db974-l29tb" [2f3ce57c-fd89-420e-863b-e5b166ccdb49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0530 20:52:28.321599 2294803 system_pods.go:61] "registry-j74nb" [80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0530 20:52:28.321607 2294803 system_pods.go:61] "registry-proxy-d7x5f" [f4513549-bfa8-495e-8b35-eee656d4eb84] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0530 20:52:28.321617 2294803 system_pods.go:61] "snapshot-controller-75bbb956b9-7w6s9" [2fe5b3f5-6c7e-49e0-acf2-4d2af6038490] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.321625 2294803 system_pods.go:61] "snapshot-controller-75bbb956b9-f5zmq" [df4c40b4-4a0e-49c0-b7dc-6038c24eee2a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.321631 2294803 system_pods.go:61] "storage-provisioner" [b9a2f28a-53c6-47f5-9a9c-44bfb2616c7c] Running
	I0530 20:52:28.321636 2294803 system_pods.go:74] duration metric: took 172.459169ms to wait for pod list to return data ...
	I0530 20:52:28.321644 2294803 default_sa.go:34] waiting for default service account to be created ...
	I0530 20:52:28.441262 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:28.513685 2294803 default_sa.go:45] found service account: "default"
	I0530 20:52:28.513708 2294803 default_sa.go:55] duration metric: took 192.05909ms for default service account to be created ...
	I0530 20:52:28.513719 2294803 system_pods.go:116] waiting for k8s-apps to be running ...
	I0530 20:52:28.634602 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:28.704293 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:28.709898 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:28.719835 2294803 system_pods.go:86] 17 kube-system pods found
	I0530 20:52:28.719915 2294803 system_pods.go:89] "coredns-5d78c9869d-ksg2p" [f9a8cda2-9fe0-4a90-b6f2-942e2dcd3627] Running
	I0530 20:52:28.719934 2294803 system_pods.go:89] "csi-hostpath-attacher-0" [30f584e7-8b32-45c8-acd4-d376356a7976] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0530 20:52:28.719944 2294803 system_pods.go:89] "csi-hostpath-resizer-0" [6f3ba8c9-8d8d-4d9f-b9c4-cde2c7580e06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0530 20:52:28.719956 2294803 system_pods.go:89] "csi-hostpathplugin-rlhv5" [f31d867d-34d9-4778-9c25-f97506b185c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0530 20:52:28.719966 2294803 system_pods.go:89] "etcd-addons-084881" [b5859865-ca58-40b0-b6bf-bcb22c22680b] Running
	I0530 20:52:28.719974 2294803 system_pods.go:89] "kindnet-rfjr4" [d446764b-ed16-41f6-b467-9265e9e62df5] Running
	I0530 20:52:28.719982 2294803 system_pods.go:89] "kube-apiserver-addons-084881" [ab71c754-cafc-494d-ae44-6a3fecd7f1dc] Running
	I0530 20:52:28.719988 2294803 system_pods.go:89] "kube-controller-manager-addons-084881" [30678b6d-fc50-4927-9eec-4dff4fcd73c6] Running
	I0530 20:52:28.720003 2294803 system_pods.go:89] "kube-ingress-dns-minikube" [2cc16a5d-5d03-4b00-bbf5-84737deefcd5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0530 20:52:28.720011 2294803 system_pods.go:89] "kube-proxy-427l8" [c25d4286-4618-4eca-bc9d-da963349be52] Running
	I0530 20:52:28.720019 2294803 system_pods.go:89] "kube-scheduler-addons-084881" [65663488-1806-4ad2-81a2-bedccf0bde50] Running
	I0530 20:52:28.720027 2294803 system_pods.go:89] "metrics-server-844d8db974-l29tb" [2f3ce57c-fd89-420e-863b-e5b166ccdb49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0530 20:52:28.720037 2294803 system_pods.go:89] "registry-j74nb" [80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0530 20:52:28.720045 2294803 system_pods.go:89] "registry-proxy-d7x5f" [f4513549-bfa8-495e-8b35-eee656d4eb84] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0530 20:52:28.720053 2294803 system_pods.go:89] "snapshot-controller-75bbb956b9-7w6s9" [2fe5b3f5-6c7e-49e0-acf2-4d2af6038490] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.720063 2294803 system_pods.go:89] "snapshot-controller-75bbb956b9-f5zmq" [df4c40b4-4a0e-49c0-b7dc-6038c24eee2a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.720069 2294803 system_pods.go:89] "storage-provisioner" [b9a2f28a-53c6-47f5-9a9c-44bfb2616c7c] Running
	I0530 20:52:28.720082 2294803 system_pods.go:126] duration metric: took 206.35743ms to wait for k8s-apps to be running ...
	I0530 20:52:28.720091 2294803 system_svc.go:44] waiting for kubelet service to be running ....
	I0530 20:52:28.720151 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 20:52:28.736989 2294803 system_svc.go:56] duration metric: took 16.889046ms WaitForService to wait for kubelet.
	I0530 20:52:28.737018 2294803 kubeadm.go:581] duration metric: took 14.584496765s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0530 20:52:28.737051 2294803 node_conditions.go:102] verifying NodePressure condition ...
	I0530 20:52:28.915820 2294803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0530 20:52:28.915852 2294803 node_conditions.go:123] node cpu capacity is 2
	I0530 20:52:28.915865 2294803 node_conditions.go:105] duration metric: took 178.807955ms to run NodePressure ...
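
The NodePressure verification reads capacity straight off the node object: 203034800Ki of ephemeral storage and 2 CPUs in this run. The same figures can be pulled with kubectl; a sketch, assuming the addons-084881 context is still loaded:

	kubectl --context addons-084881 get node addons-084881 \
	  -o jsonpath="{.status.capacity.cpu} {.status.capacity['ephemeral-storage']}"
	# -> 2 203034800Ki
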
	I0530 20:52:28.915903 2294803 start.go:228] waiting for startup goroutines ...
	I0530 20:52:28.942254 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:29.135112 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:29.204878 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:29.209810 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:29.442942 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:29.635238 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:29.703764 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:29.709647 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:29.941996 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:30.134857 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:30.213434 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:30.214677 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:30.441435 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:30.634483 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:30.704864 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:30.710186 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:30.941399 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:31.134593 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:31.203946 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:31.209524 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:31.441387 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:31.634217 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:31.704283 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:31.713197 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:31.940608 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:32.134640 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:32.203811 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:32.209459 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:32.441090 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:32.638501 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:32.704693 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:32.709680 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:32.942494 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:33.138331 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:33.216512 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:33.223692 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:33.443214 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:33.636219 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:33.707355 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:33.718727 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:33.941967 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:34.135199 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:34.206206 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:34.216166 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:34.442755 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:34.635312 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:34.706201 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:34.712471 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:34.946808 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:35.137658 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:35.207679 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:35.215256 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:35.447643 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:35.634787 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:35.705071 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:35.713443 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:35.942928 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:36.135246 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:36.204714 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:36.214091 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:36.442007 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:36.635301 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:36.704040 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:36.711409 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:36.940839 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:37.134713 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:37.204965 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:37.210018 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:37.440832 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:37.634480 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:37.715779 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:37.723035 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:37.946849 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:38.135137 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:38.204697 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:38.209516 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:38.441632 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:38.635292 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:38.704508 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:38.718055 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:38.942034 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:39.137024 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:39.205191 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:39.211041 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:39.443632 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:39.636671 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:39.711213 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:39.713592 2294803 kapi.go:107] duration metric: took 19.016627311s to wait for kubernetes.io/minikube-addons=registry ...
	I0530 20:52:39.945383 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:40.143760 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:40.204933 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:40.442556 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:40.635699 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:40.705568 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:40.942674 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:41.135196 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:41.203833 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:41.442519 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:41.639997 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:41.705138 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:41.941899 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:42.135097 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:42.204046 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:42.443114 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:42.635028 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:42.703948 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:42.941745 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:43.139013 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:43.226375 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:43.442112 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:43.647259 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:43.704062 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:43.941357 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:44.134514 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:44.206809 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:44.442550 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:44.635160 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:44.704482 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:44.942459 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:45.134450 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:45.208464 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:45.441916 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:45.635259 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:45.704325 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:45.941357 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:46.135636 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:46.216545 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:46.442339 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:46.635169 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:46.706531 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:46.956358 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:47.138559 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:47.205653 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:47.442493 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:47.634640 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:47.704158 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:47.941490 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:48.134406 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:48.204758 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:48.442541 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:48.635463 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:48.703998 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:48.941580 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:49.134750 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:49.204314 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:49.450461 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:49.635123 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:49.704559 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:49.941583 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:50.134635 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:50.204194 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:50.441124 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:50.635092 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:50.704236 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:50.941694 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:51.134689 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:51.204049 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:51.445156 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:51.634522 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:51.704277 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:51.942133 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:52.135010 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:52.204244 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:52.441967 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:52.634710 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:52.704444 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:52.941852 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:53.135494 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:53.205690 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:53.441805 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:53.636528 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:53.704422 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:53.942071 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:54.134678 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:54.204342 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:54.442353 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:54.635093 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:54.704801 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:54.959343 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:55.134869 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:55.204628 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:55.441899 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:55.634180 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:55.706037 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:55.943385 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:56.134172 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:56.203867 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:56.442122 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:56.636160 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:56.705263 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:56.941444 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:57.135330 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:57.204133 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:57.441117 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:57.635409 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:57.706443 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:57.940915 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:58.136473 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:58.204446 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:58.442453 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:58.636613 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:58.707573 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:58.942493 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:59.134153 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:59.203954 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:59.478057 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:59.638866 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:59.708861 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:59.942989 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:00.142610 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:00.208773 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:00.442178 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:00.635208 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:00.703982 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:00.941272 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:01.135463 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:01.204644 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:01.442205 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:01.636203 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:01.706287 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:01.942049 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:02.134978 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:02.207212 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:02.440991 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:02.634857 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:02.703946 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:02.942350 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:03.134712 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:03.206219 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:03.441883 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:03.634952 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:03.704863 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:03.941780 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:04.134786 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:04.204048 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:04.441399 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:04.634785 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:04.704536 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:04.941422 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:05.136002 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:05.204589 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:05.442468 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:05.634907 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:05.705367 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:05.941212 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:06.135249 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:06.204766 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:06.441885 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:06.634384 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:06.713795 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:06.942333 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:07.134741 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:07.205383 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:07.441957 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:07.635379 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:07.704767 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:07.941588 2294803 kapi.go:107] duration metric: took 45.027960536s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0530 20:53:08.135071 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:08.204501 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:08.634163 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:08.704607 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:09.134268 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:09.204390 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:09.634650 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:09.703766 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:10.135548 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:10.204688 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:10.634759 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:10.703975 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:11.135409 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:11.204481 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:11.634771 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:11.704601 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:12.134083 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:12.203479 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:12.634464 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:12.703995 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:13.135057 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:13.203684 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:13.634621 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:13.703989 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:14.135164 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:14.203630 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:14.635283 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:14.704282 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:15.134868 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:15.203819 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:15.635042 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:15.705048 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:16.134972 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:16.204280 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:16.635079 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:16.704367 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:17.135754 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:17.204735 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:17.635146 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:17.713606 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:18.134805 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:18.204305 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:18.634154 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:18.704160 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:19.134369 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:19.204538 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:19.634730 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:19.704151 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:20.135132 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:20.204172 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:20.634166 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:20.703890 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:21.134633 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:21.203793 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:21.634902 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:21.704647 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:22.134130 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:22.204421 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:22.634470 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:22.705723 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:23.134575 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:23.203813 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:23.635003 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:23.704150 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:24.134621 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:24.204448 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:24.634906 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:24.704591 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:25.134666 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:25.204050 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:25.634036 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:25.703939 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:26.134711 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:26.203642 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:26.634645 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:26.704208 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:27.134238 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:27.204090 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:27.634117 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:27.704158 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:28.135090 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:28.203786 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:28.634924 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:28.704837 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:29.135329 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:29.204038 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:29.635541 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:29.704769 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:30.135543 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:30.205240 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:30.634242 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:30.704457 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:31.135736 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:31.204205 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:31.642746 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:31.707193 2294803 kapi.go:107] duration metric: took 1m11.014719858s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0530 20:53:32.135606 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:32.635583 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:33.135551 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:33.634718 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:34.134462 2294803 kapi.go:107] duration metric: took 1m10.01406186s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0530 20:53:34.136636 2294803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-084881 cluster.
	I0530 20:53:34.138602 2294803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0530 20:53:34.140250 2294803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0530 20:53:34.142226 2294803 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0530 20:53:34.144163 2294803 addons.go:499] enable addons completed in 1m20.842182702s: enabled=[storage-provisioner cloud-spanner default-storageclass ingress-dns inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0530 20:53:34.144216 2294803 start.go:233] waiting for cluster config update ...
	I0530 20:53:34.144237 2294803 start.go:242] writing updated cluster config ...
	I0530 20:53:34.144570 2294803 ssh_runner.go:195] Run: rm -f paused
	I0530 20:53:34.544197 2294803 start.go:568] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0530 20:53:34.546471 2294803 out.go:177] * Done! kubectl is now configured to use "addons-084881" cluster and "default" namespace by default
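	(For reference, the gcp-auth opt-out mentioned in the messages above is applied as a pod label. A minimal sketch of such a pod configuration follows; only the `gcp-auth-skip-secret` label key is confirmed by the log message, while the pod name, image, and the "true" value are illustrative assumptions.)
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                  # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"      # label key from the gcp-auth message above; value illustrative
	spec:
	  containers:
	  - name: app
	    image: nginx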
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	00016db86b900       13753a81eccfd       36 seconds ago       Exited              hello-world-app           3                   22c1f8b9c956e       hello-world-app-65bdb79f98-44ph7
	7bf43ab6b9484       5ee47dcca7543       About a minute ago   Running             nginx                     0                   ee38b42c0afa0       nginx
	a77450474f31d       d23bd5d730ccb       2 minutes ago        Running             headlamp                  0                   4c487c1039183       headlamp-6b5756787-jnx5p
	c8c7332bcef1c       2a5f29343eb03       3 minutes ago        Running             gcp-auth                  0                   209039fefd2d4       gcp-auth-58478865f7-bmjjs
	5c5b68bdceed1       97e04611ad434       4 minutes ago        Running             coredns                   0                   3ca0513984547       coredns-5d78c9869d-ksg2p
	3d719535e72b7       ba04bb24b9575       4 minutes ago        Running             storage-provisioner       0                   f9983f0245a01       storage-provisioner
	af06deacbe464       b18bf71b941ba       4 minutes ago        Running             kindnet-cni               0                   b46bb2e7a2e33       kindnet-rfjr4
	eb52e16f0bf4f       29921a0845422       4 minutes ago        Running             kube-proxy                0                   c8bdc3b60665e       kube-proxy-427l8
	082669e89f69c       305d7ed1dae28       4 minutes ago        Running             kube-scheduler            0                   fe6d9b8c5d54e       kube-scheduler-addons-084881
	1c58be7ab9b9a       2ee705380c3c5       4 minutes ago        Running             kube-controller-manager   0                   1c5bc2fdf710f       kube-controller-manager-addons-084881
	8ec23ffd6cce0       72c9df6be7f1b       4 minutes ago        Running             kube-apiserver            0                   ace0f0469b5be       kube-apiserver-addons-084881
	613a3ce5b6219       24bc64e911039       4 minutes ago        Running             etcd                      0                   3cd4b1a2114b7       etcd-addons-084881
	
	* 
	* ==> containerd <==
	* May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.914115467Z" level=warning msg="cleaning up after shim disconnected" id=806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de namespace=k8s.io
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.914219015Z" level=info msg="cleaning up dead shim"
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.929622643Z" level=info msg="StopPodSandbox for \"643557ef2450ea5d3719d50cdfdb3d53da7782183e077162df72b11b3d37c920\""
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.930254802Z" level=info msg="Container to stop \"551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.967648135Z" level=warning msg="cleanup warnings time=\"2023-05-30T20:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10317 runtime=io.containerd.runc.v2\n"
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.974906060Z" level=info msg="StopContainer for \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\" returns successfully"
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.975641988Z" level=info msg="StopPodSandbox for \"4df08001bde43e111d43f450bf80320c8109af12ca227355771fb33cb77d20e5\""
	May 30 20:56:47 addons-084881 containerd[739]: time="2023-05-30T20:56:47.975828686Z" level=info msg="Container to stop \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.053730986Z" level=info msg="shim disconnected" id=643557ef2450ea5d3719d50cdfdb3d53da7782183e077162df72b11b3d37c920
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.055009843Z" level=warning msg="cleaning up after shim disconnected" id=643557ef2450ea5d3719d50cdfdb3d53da7782183e077162df72b11b3d37c920 namespace=k8s.io
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.055171105Z" level=info msg="cleaning up dead shim"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.078748334Z" level=info msg="shim disconnected" id=4df08001bde43e111d43f450bf80320c8109af12ca227355771fb33cb77d20e5
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.084272989Z" level=warning msg="cleaning up after shim disconnected" id=4df08001bde43e111d43f450bf80320c8109af12ca227355771fb33cb77d20e5 namespace=k8s.io
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.084475810Z" level=info msg="cleaning up dead shim"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.112090822Z" level=warning msg="cleanup warnings time=\"2023-05-30T20:56:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10378 runtime=io.containerd.runc.v2\n"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.118800977Z" level=warning msg="cleanup warnings time=\"2023-05-30T20:56:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10367 runtime=io.containerd.runc.v2\n"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.150292231Z" level=info msg="TearDown network for sandbox \"4df08001bde43e111d43f450bf80320c8109af12ca227355771fb33cb77d20e5\" successfully"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.150490416Z" level=info msg="StopPodSandbox for \"4df08001bde43e111d43f450bf80320c8109af12ca227355771fb33cb77d20e5\" returns successfully"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.214199491Z" level=info msg="TearDown network for sandbox \"643557ef2450ea5d3719d50cdfdb3d53da7782183e077162df72b11b3d37c920\" successfully"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.214453790Z" level=info msg="StopPodSandbox for \"643557ef2450ea5d3719d50cdfdb3d53da7782183e077162df72b11b3d37c920\" returns successfully"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.260199972Z" level=info msg="RemoveContainer for \"551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468\""
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.269441765Z" level=info msg="RemoveContainer for \"551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468\" returns successfully"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.272350910Z" level=info msg="RemoveContainer for \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\""
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.280870436Z" level=info msg="RemoveContainer for \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\" returns successfully"
	May 30 20:56:48 addons-084881 containerd[739]: time="2023-05-30T20:56:48.284426648Z" level=error msg="ContainerStatus for \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\": not found"
	
	* 
	* ==> coredns [5c5b68bdceed16eefa52edb2364ef317c0a66325b87172adf4ac8c66243c3c54] <==
	* [INFO] 10.244.0.16:43421 - 25818 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089542s
	[INFO] 10.244.0.16:57513 - 57338 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002253274s
	[INFO] 10.244.0.16:43421 - 40194 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001919705s
	[INFO] 10.244.0.16:57513 - 39324 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002461952s
	[INFO] 10.244.0.16:43421 - 42269 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001572442s
	[INFO] 10.244.0.16:57513 - 38592 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000990261s
	[INFO] 10.244.0.16:43421 - 1348 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000111409s
	[INFO] 10.244.0.16:49458 - 242 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117865s
	[INFO] 10.244.0.16:49441 - 54533 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040837s
	[INFO] 10.244.0.16:49441 - 30268 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053456s
	[INFO] 10.244.0.16:49458 - 33152 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033977s
	[INFO] 10.244.0.16:49441 - 19840 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062276s
	[INFO] 10.244.0.16:49458 - 20433 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035872s
	[INFO] 10.244.0.16:49458 - 59127 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043757s
	[INFO] 10.244.0.16:49441 - 6181 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036053s
	[INFO] 10.244.0.16:49458 - 24142 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045194s
	[INFO] 10.244.0.16:49441 - 21192 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027003s
	[INFO] 10.244.0.16:49458 - 63777 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048943s
	[INFO] 10.244.0.16:49441 - 52230 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030761s
	[INFO] 10.244.0.16:49458 - 54212 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001382241s
	[INFO] 10.244.0.16:49441 - 48146 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001158875s
	[INFO] 10.244.0.16:49458 - 25488 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001087457s
	[INFO] 10.244.0.16:49458 - 40952 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000160302s
	[INFO] 10.244.0.16:49441 - 30588 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000960164s
	[INFO] 10.244.0.16:49441 - 58927 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074666s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-084881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-084881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d0d5d534b34391ed9438fcde26494d33a798fae
	                    minikube.k8s.io/name=addons-084881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_30T20_52_01_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-084881
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 May 2023 20:51:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-084881
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 May 2023 20:56:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:51:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:51:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:51:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:52:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-084881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebdb46a61c784da58e322d5ccf84a80e
	  System UUID:                32c0a95d-4ed7-4b24-a3c4-9fbe00414871
	  Boot ID:                    c7a134eb-0be2-46e6-bcc1-b9fd815daa7a
	  Kernel Version:             5.15.0-1036-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-44ph7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gcp-auth                    gcp-auth-58478865f7-bmjjs                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  headlamp                    headlamp-6b5756787-jnx5p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 coredns-5d78c9869d-ksg2p                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m36s
	  kube-system                 etcd-addons-084881                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m48s
	  kube-system                 kindnet-rfjr4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m36s
	  kube-system                 kube-apiserver-addons-084881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-084881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-427l8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-084881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node addons-084881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node addons-084881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node addons-084881 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s                  kubelet          Node addons-084881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s                  kubelet          Node addons-084881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s                  kubelet          Node addons-084881 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m49s                  kubelet          Node addons-084881 status is now: NodeNotReady
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m48s                  kubelet          Node addons-084881 status is now: NodeReady
	  Normal  RegisteredNode           4m36s                  node-controller  Node addons-084881 event: Registered Node addons-084881 in Controller
	
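Nothing in this node description implicates the node itself: it is Ready, the pressure conditions are all False, and the early NodeNotReady/NodeReady pair in the events is just the kubelet restart during kubeadm bootstrap. CPU requests total 850m of the node's 2 cores (42%), so the crash loops reported below are unlikely to be resource starvation. The block can be re-collected at any time with:

    kubectl --context addons-084881 describe node addons-084881
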
	* 
	* ==> dmesg <==
	* [  +0.000728] FS-Cache: N-cookie c=000001cc [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=000000009d169f1a
	[  +0.001295] FS-Cache: N-key=[8] '34635c0100000000'
	[  +0.002506] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=000001c6 [p=000001c3 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=0000000053231b04
	[  +0.001170] FS-Cache: O-key=[8] '34635c0100000000'
	[  +0.000837] FS-Cache: N-cookie c=000001cd [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=00000000296365f9
	[  +0.001089] FS-Cache: N-key=[8] '34635c0100000000'
	[  +3.513956] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=000001c4 [p=000001c3 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=000000008e490977
	[  +0.001097] FS-Cache: O-key=[8] '33635c0100000000'
	[  +0.000721] FS-Cache: N-cookie c=000001cf [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=00000000e4e5b2b9
	[  +0.001067] FS-Cache: N-key=[8] '33635c0100000000'
	[  +0.410886] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=000001c9 [p=000001c3 fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=000000004ccc93d9
	[  +0.001066] FS-Cache: O-key=[8] '39635c0100000000'
	[  +0.000717] FS-Cache: N-cookie c=000001d0 [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=000000009d169f1a
	[  +0.001073] FS-Cache: N-key=[8] '39635c0100000000'
	[May30 20:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [613a3ce5b62193d205696707e5572226751511429928e4165faa5be3182ba9e0] <==
	* {"level":"info","ts":"2023-05-30T20:51:53.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-05-30T20:51:53.126Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-05-30T20:51:53.129Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-05-30T20:51:53.213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.217Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-084881 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-30T20:51:53.220Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-30T20:51:53.235Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-05-30T20:51:53.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-30T20:51:53.248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
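The etcd log is a clean single-member bootstrap: the sole voter pre-votes and votes for itself, becomes leader at term 2, and begins serving clients on 2379, so etcd can be ruled out as a cause here. If the store ever needs inspecting on this node, the TLS paths are the ones the log itself prints; a sketch using them:

    kubectl --context addons-084881 -n kube-system exec etcd-addons-084881 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint status
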
	* 
	* ==> gcp-auth [c8c7332bcef1cf05a3e6ed6b8a893978e3f6139d7d559af9f6b0a97a3bfe8f89] <==
	* 2023/05/30 20:53:32 GCP Auth Webhook started!
	2023/05/30 20:53:41 Ready to marshal response ...
	2023/05/30 20:53:41 Ready to write response ...
	2023/05/30 20:53:41 Ready to marshal response ...
	2023/05/30 20:53:41 Ready to write response ...
	2023/05/30 20:53:41 Ready to marshal response ...
	2023/05/30 20:53:41 Ready to write response ...
	2023/05/30 20:53:44 Ready to marshal response ...
	2023/05/30 20:53:44 Ready to write response ...
	2023/05/30 20:54:23 Ready to marshal response ...
	2023/05/30 20:54:23 Ready to write response ...
	2023/05/30 20:54:46 Ready to marshal response ...
	2023/05/30 20:54:46 Ready to write response ...
	2023/05/30 20:55:20 Ready to marshal response ...
	2023/05/30 20:55:20 Ready to write response ...
	2023/05/30 20:55:27 Ready to marshal response ...
	2023/05/30 20:55:27 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:56:49 up 2 days, 38 min,  0 users,  load average: 0.73, 1.63, 2.50
	Linux addons-084881 5.15.0-1036-aws #40~20.04.1-Ubuntu SMP Mon Apr 24 00:20:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [af06deacbe464051e210a3f055c95c3588f886041c59ccc0c8b936cec9c6fcb3] <==
	* I0530 20:54:45.278115       1 main.go:227] handling current node
	I0530 20:54:55.301358       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:55.301712       1 main.go:227] handling current node
	I0530 20:55:05.314331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:05.314360       1 main.go:227] handling current node
	I0530 20:55:15.318606       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:15.318634       1 main.go:227] handling current node
	I0530 20:55:25.330765       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:25.330793       1 main.go:227] handling current node
	I0530 20:55:35.343459       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:35.343489       1 main.go:227] handling current node
	I0530 20:55:45.347570       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:45.347598       1 main.go:227] handling current node
	I0530 20:55:55.357061       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:55.357092       1 main.go:227] handling current node
	I0530 20:56:05.361639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:56:05.361667       1 main.go:227] handling current node
	I0530 20:56:15.421336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:56:15.421437       1 main.go:227] handling current node
	I0530 20:56:25.428498       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:56:25.428533       1 main.go:227] handling current node
	I0530 20:56:35.434463       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:56:35.434492       1 main.go:227] handling current node
	I0530 20:56:45.447293       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:56:45.447325       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [8ec23ffd6cce04b9b6f4d47dce22553c2db522c25837a81c821cb207293f180e] <==
	* I0530 20:55:02.424237       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0530 20:55:02.441166       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.441231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0530 20:55:02.457780       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.458002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0530 20:55:03.329890       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0530 20:55:03.458820       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0530 20:55:03.467294       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0530 20:55:13.604861       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0530 20:55:13.619198       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0530 20:55:14.639684       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0530 20:55:19.843308       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0530 20:55:20.277723       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.109.81.166]
	I0530 20:55:28.076109       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.107.241.181]
	E0530 20:55:40.156967       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0530 20:55:40.156998       1 handler_proxy.go:100] no RequestInfo found in the context
	E0530 20:55:40.157036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0530 20:55:40.157213       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0530 20:55:40.189766       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0530 20:56:40.157474       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0530 20:56:40.157562       1 handler_proxy.go:100] no RequestInfo found in the context
	E0530 20:56:40.157619       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0530 20:56:40.157656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
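The repeated 503s for v1beta1.metrics.k8s.io mean the aggregated APIService registration outlived its backing Service ("service metrics-server not found"), so the aggregator keeps requeueing it; that is teardown noise from the metrics-server addon rather than a cause of the failures above. A dangling registration would show up as Available=False (a sketch):

    kubectl --context addons-084881 get apiservice v1beta1.metrics.k8s.io
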
	* 
	* ==> kube-controller-manager [1c58be7ab9b9a60dc2a4d17caa2d6e2c41f7f2e0fac53a9a8c8a5359e64b3029] <==
	* I0530 20:55:27.831548       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0530 20:55:27.847432       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-44ph7"
	W0530 20:55:34.832622       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:34.832659       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:38.512134       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:38.512170       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:42.072775       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:42.072812       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0530 20:55:43.284959       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0530 20:55:43.285058       1 shared_informer.go:318] Caches are synced for resource quota
	I0530 20:55:44.586961       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0530 20:55:44.602547       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0530 20:55:46.079616       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:46.079654       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:50.964709       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:50.964742       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0530 20:55:54.641133       1 namespace_controller.go:182] "Namespace has been deleted" namespace="ingress-nginx"
	W0530 20:56:13.687156       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:56:13.687276       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:56:23.825397       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:56:23.825433       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:56:28.744584       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:56:28.744622       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:56:29.922636       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:56:29.922670       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
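Likewise, the controller-manager's *v1.PartialObjectMetadata list failures line up with the apiserver's "Terminating all watchers" messages above: the metadata informers are still watching the snapshot.storage.k8s.io and gadget.kinvolk.io resources whose CRDs were deleted mid-run, and they retry until the stale informers are torn down. Checking which of those CRDs remain (a sketch):

    kubectl --context addons-084881 get crd | grep -e snapshot.storage -e gadget
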
	* 
	* ==> kube-proxy [eb52e16f0bf4fd3420874af03cdfa2de7cce1c26b1df997e5d916dd56f889860] <==
	* I0530 20:52:14.643037       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0530 20:52:14.643146       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0530 20:52:14.643170       1 server_others.go:551] "Using iptables proxy"
	I0530 20:52:14.682528       1 server_others.go:190] "Using iptables Proxier"
	I0530 20:52:14.682567       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0530 20:52:14.682576       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0530 20:52:14.682589       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0530 20:52:14.682654       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0530 20:52:14.683208       1 server.go:657] "Version info" version="v1.27.2"
	I0530 20:52:14.683222       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0530 20:52:14.688912       1 config.go:188] "Starting service config controller"
	I0530 20:52:14.688938       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0530 20:52:14.688967       1 config.go:97] "Starting endpoint slice config controller"
	I0530 20:52:14.688989       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0530 20:52:14.690281       1 config.go:315] "Starting node config controller"
	I0530 20:52:14.690295       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0530 20:52:14.789954       1 shared_informer.go:318] Caches are synced for service config
	I0530 20:52:14.789942       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0530 20:52:14.790541       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [082669e89f69c1be4381226272cd9592929ec10e38177b6b7c6a68cbd4a02017] <==
	* W0530 20:51:57.392999       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:57.393106       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:57.393026       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0530 20:51:57.393189       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0530 20:51:57.393250       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0530 20:51:57.393351       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 20:51:57.393410       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 20:51:57.393356       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0530 20:51:57.393582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0530 20:51:57.393601       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0530 20:51:58.319511       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:58.319757       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:58.337207       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0530 20:51:58.337245       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0530 20:51:58.343950       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 20:51:58.343987       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0530 20:51:58.360726       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:58.360770       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:58.366675       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0530 20:51:58.366713       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0530 20:51:58.509998       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:58.510042       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:58.538321       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0530 20:51:58.538365       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0530 20:52:00.174684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 30 20:56:27 addons-084881 kubelet[1349]: I0530 20:56:27.835338    1349 scope.go:115] "RemoveContainer" containerID="00016db86b9008805b0fbeba0fc09972d72ac3180b2aac53c47a62107b581a03"
	May 30 20:56:27 addons-084881 kubelet[1349]: E0530 20:56:27.836240    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-44ph7_default(5805efdb-3dd5-4083-877b-efafae72403e)\"" pod="default/hello-world-app-65bdb79f98-44ph7" podUID=5805efdb-3dd5-4083-877b-efafae72403e
	May 30 20:56:30 addons-084881 kubelet[1349]: I0530 20:56:30.835442    1349 kubelet_pods.go:894] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d7x5f" secret="" err="secret \"gcp-auth\" not found"
	May 30 20:56:30 addons-084881 kubelet[1349]: I0530 20:56:30.835490    1349 scope.go:115] "RemoveContainer" containerID="551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468"
	May 30 20:56:30 addons-084881 kubelet[1349]: E0530 20:56:30.835756    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-d7x5f_kube-system(f4513549-bfa8-495e-8b35-eee656d4eb84)\"" pod="kube-system/registry-proxy-d7x5f" podUID=f4513549-bfa8-495e-8b35-eee656d4eb84
	May 30 20:56:38 addons-084881 kubelet[1349]: I0530 20:56:38.837362    1349 scope.go:115] "RemoveContainer" containerID="00016db86b9008805b0fbeba0fc09972d72ac3180b2aac53c47a62107b581a03"
	May 30 20:56:38 addons-084881 kubelet[1349]: E0530 20:56:38.838112    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-44ph7_default(5805efdb-3dd5-4083-877b-efafae72403e)\"" pod="default/hello-world-app-65bdb79f98-44ph7" podUID=5805efdb-3dd5-4083-877b-efafae72403e
	May 30 20:56:44 addons-084881 kubelet[1349]: I0530 20:56:44.835377    1349 kubelet_pods.go:894] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d7x5f" secret="" err="secret \"gcp-auth\" not found"
	May 30 20:56:44 addons-084881 kubelet[1349]: I0530 20:56:44.835882    1349 scope.go:115] "RemoveContainer" containerID="551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468"
	May 30 20:56:44 addons-084881 kubelet[1349]: E0530 20:56:44.836235    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-d7x5f_kube-system(f4513549-bfa8-495e-8b35-eee656d4eb84)\"" pod="kube-system/registry-proxy-d7x5f" podUID=f4513549-bfa8-495e-8b35-eee656d4eb84
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.255224    1349 scope.go:115] "RemoveContainer" containerID="551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468"
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.269854    1349 scope.go:115] "RemoveContainer" containerID="806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de"
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.283818    1349 scope.go:115] "RemoveContainer" containerID="806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de"
	May 30 20:56:48 addons-084881 kubelet[1349]: E0530 20:56:48.284701    1349 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\": not found" containerID="806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de"
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.284784    1349 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de} err="failed to get container status \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\": rpc error: code = NotFound desc = an error occurred when try to find container \"806f0099e7aac62515a01eaf49267d41d0c5ef89073ed48408af356df78cf5de\": not found"
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.304940    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w48n\" (UniqueName: \"kubernetes.io/projected/80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568-kube-api-access-5w48n\") pod \"80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568\" (UID: \"80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568\") "
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.305243    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nphcn\" (UniqueName: \"kubernetes.io/projected/f4513549-bfa8-495e-8b35-eee656d4eb84-kube-api-access-nphcn\") pod \"f4513549-bfa8-495e-8b35-eee656d4eb84\" (UID: \"f4513549-bfa8-495e-8b35-eee656d4eb84\") "
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.307997    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4513549-bfa8-495e-8b35-eee656d4eb84-kube-api-access-nphcn" (OuterVolumeSpecName: "kube-api-access-nphcn") pod "f4513549-bfa8-495e-8b35-eee656d4eb84" (UID: "f4513549-bfa8-495e-8b35-eee656d4eb84"). InnerVolumeSpecName "kube-api-access-nphcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.310893    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568-kube-api-access-5w48n" (OuterVolumeSpecName: "kube-api-access-5w48n") pod "80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568" (UID: "80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568"). InnerVolumeSpecName "kube-api-access-5w48n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.406161    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5w48n\" (UniqueName: \"kubernetes.io/projected/80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568-kube-api-access-5w48n\") on node \"addons-084881\" DevicePath \"\""
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.406203    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nphcn\" (UniqueName: \"kubernetes.io/projected/f4513549-bfa8-495e-8b35-eee656d4eb84-kube-api-access-nphcn\") on node \"addons-084881\" DevicePath \"\""
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.838330    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568 path="/var/lib/kubelet/pods/80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568/volumes"
	May 30 20:56:48 addons-084881 kubelet[1349]: I0530 20:56:48.838758    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f4513549-bfa8-495e-8b35-eee656d4eb84 path="/var/lib/kubelet/pods/f4513549-bfa8-495e-8b35-eee656d4eb84/volumes"
	May 30 20:56:49 addons-084881 kubelet[1349]: I0530 20:56:49.835293    1349 scope.go:115] "RemoveContainer" containerID="00016db86b9008805b0fbeba0fc09972d72ac3180b2aac53c47a62107b581a03"
	May 30 20:56:49 addons-084881 kubelet[1349]: E0530 20:56:49.835593    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-44ph7_default(5805efdb-3dd5-4083-877b-efafae72403e)\"" pod="default/hello-world-app-65bdb79f98-44ph7" podUID=5805efdb-3dd5-4083-877b-efafae72403e
	
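For the Registry failure this kubelet section is the most telling one: registry-proxy-d7x5f sits in CrashLoopBackOff with a 2m40s back-off, and it is that proxy which is expected to serve the registry on the node's port 5000, which matches the connection-refused errors against http://192.168.49.2:5000 throughout this report. The natural next step is the crashed container's own output, taken before the addon is disabled and the pod deleted (a sketch; pod name from the lines above):

    kubectl --context addons-084881 -n kube-system logs registry-proxy-d7x5f --previous
    kubectl --context addons-084881 -n kube-system describe pod registry-proxy-d7x5f
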
	* 
	* ==> storage-provisioner [3d719535e72b7bd312068a00b86c89faa511de8b0be3bb3a8a44db591b3684bb] <==
	* I0530 20:52:18.465336       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0530 20:52:18.493704       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0530 20:52:18.493803       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0530 20:52:18.510326       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0530 20:52:18.510553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-084881_28e3c32e-0519-47a0-aaab-5fb678849377!
	I0530 20:52:18.517740       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6b5db0f-6381-4a92-bc9a-a880d59eb8bd", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-084881_28e3c32e-0519-47a0-aaab-5fb678849377 became leader
	I0530 20:52:18.611223       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-084881_28e3c32e-0519-47a0-aaab-5fb678849377!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-084881 -n addons-084881
helpers_test.go:261: (dbg) Run:  kubectl --context addons-084881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (196.20s)

TestAddons/parallel/Ingress (35.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-084881 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-084881 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-084881 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ca8e8cfc-0dc4-40ef-b942-ab2c2a76ada8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ca8e8cfc-0dc4-40ef-b942-ab2c2a76ada8] Running
2023/05/30 20:55:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:26 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:55:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.013118605s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2023/05/30 20:55:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:27 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:262: (dbg) Run:  kubectl --context addons-084881 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
2023/05/30 20:55:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/30 20:55:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/30 20:55:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.066313955s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
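
The nslookup timeout means nothing answered DNS on 192.168.49.2:53, i.e. the ingress-dns addon's resolver was not reachable at the node address when queried; the interleaved 192.168.49.2:5000 retry noise comes from the parallel Registry test, not from this one. A first check (a sketch; the ingress-dns pod name varies across minikube versions, hence the grep):

    kubectl --context addons-084881 -n kube-system get pods -o wide | grep ingress-dns
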
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 addons disable ingress --alsologtostderr -v=1
2023/05/30 20:55:46 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:55:46 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:46 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/30 20:55:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/30 20:55:49 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:49 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-084881 addons disable ingress --alsologtostderr -v=1: (7.542870827s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-084881
helpers_test.go:235: (dbg) docker inspect addons-084881:

-- stdout --
	[
	    {
	        "Id": "859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8",
	        "Created": "2023-05-30T20:51:36.962467612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2295261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-30T20:51:37.28701817Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/hosts",
	        "LogPath": "/var/lib/docker/containers/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8/859d084e65f35554263ae45fd359047de7083a1b5c7ada40731af16fcba330d8-json.log",
	        "Name": "/addons-084881",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-084881:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-084881",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc-init/diff:/var/lib/docker/overlay2/e2ed5c199a0c2e09246fd5671b525fc670ce3dff10bd06ad0c2ad37b9496c295/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b9bff5ce4d6c35ee49462d36f9f81eedd7d30091b5ef90d5a4ee237ca4154cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-084881",
	                "Source": "/var/lib/docker/volumes/addons-084881/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-084881",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-084881",
	                "name.minikube.sigs.k8s.io": "addons-084881",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b2aa69c4931017eb29496833d122325d44b5b0e2be022386fa049f9b6e6bb54",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40946"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40945"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40944"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40943"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5b2aa69c4931",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-084881": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "859d084e65f3",
	                        "addons-084881"
	                    ],
	                    "NetworkID": "c6f377c8d6cc1177d9b92e2a53dac44d9a269723fb79fe723bcc8795800df6da",
	                    "EndpointID": "a45f38a2f9a969cf149af0f47c5702d7de9d91e3a3aaac71dfc1590433845cce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
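
The port mappings in the inspect output above show 5000/tcp published only to 127.0.0.1:40944 on the host, while the failed check dialed the container IP 192.168.49.2:5000 directly, where the registry proxy should have answered. To re-run that probe outside the harness, here is a minimal Go sketch of the same retrying GET loop; it is not the harness's actual retry client, and the URL, attempt count, and backoff pattern are simply copied from the log output above for illustration.

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// probe issues GET requests against url, retrying with doubling
	// backoff and printing DEBUG/ERR lines in the same shape as the
	// test output above.
	func probe(url string, attempts int) error {
		backoff := time.Second
		for i := 0; i < attempts; i++ {
			fmt.Printf("[DEBUG] GET %s\n", url)
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // reachable: something is listening on the registry port
			}
			fmt.Printf("[ERR] GET %s request failed: %v\n", url, err)
			if i < attempts-1 {
				fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, backoff, attempts-i-1)
				time.Sleep(backoff)
				backoff *= 2
			}
		}
		return fmt.Errorf("GET %s giving up after %d attempt(s)", url, attempts)
	}

	func main() {
		// 192.168.49.2 is the cluster IP from this run; substitute the
		// value printed by `minikube -p addons-084881 ip` when reproducing.
		if err := probe("http://192.168.49.2:5000", 5); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

Running the probe while the registry addon is still enabled helps distinguish an addon that was torn down mid-test from a port mapping that never worked.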
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-084881 -n addons-084881
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 logs -n 25
2023/05/30 20:55:53 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:53 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-084881 logs -n 25: (1.655563965s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:50 UTC |                     |
	|         | -p download-only-942566        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:51 UTC |                     |
	|         | -p download-only-942566        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| delete  | -p download-only-942566        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| delete  | -p download-only-942566        | download-only-942566   | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| start   | --download-only -p             | download-docker-074087 | jenkins | v1.30.1 | 30 May 23 20:51 UTC |                     |
	|         | download-docker-074087         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p download-docker-074087      | download-docker-074087 | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| start   | --download-only -p             | binary-mirror-812172   | jenkins | v1.30.1 | 30 May 23 20:51 UTC |                     |
	|         | binary-mirror-812172           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42239         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-812172        | binary-mirror-812172   | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:51 UTC |
	| start   | -p addons-084881               | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:51 UTC | 30 May 23 20:53 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:53 UTC | 30 May 23 20:53 UTC |
	|         | addons-084881                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:53 UTC | 30 May 23 20:53 UTC |
	|         | -p addons-084881               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-084881 ip               | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:54 UTC | 30 May 23 20:54 UTC |
	| addons  | addons-084881 addons           | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:54 UTC | 30 May 23 20:55 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-084881 addons           | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-084881 addons           | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | addons-084881                  |                        |         |         |                     |                     |
	| ssh     | addons-084881 ssh curl -s      | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-084881 ip               | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	| addons  | addons-084881 addons disable   | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-084881 addons disable   | addons-084881          | jenkins | v1.30.1 | 30 May 23 20:55 UTC | 30 May 23 20:55 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 20:51:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 20:51:14.525064 2294803 out.go:296] Setting OutFile to fd 1 ...
	I0530 20:51:14.525283 2294803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:51:14.525325 2294803 out.go:309] Setting ErrFile to fd 2...
	I0530 20:51:14.525347 2294803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:51:14.525538 2294803 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 20:51:14.526022 2294803 out.go:303] Setting JSON to false
	I0530 20:51:14.527104 2294803 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":174774,"bootTime":1685305101,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 20:51:14.527199 2294803 start.go:135] virtualization:  
	I0530 20:51:14.529740 2294803 out.go:177] * [addons-084881] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 20:51:14.532207 2294803 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 20:51:14.532281 2294803 notify.go:220] Checking for updates...
	I0530 20:51:14.533771 2294803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 20:51:14.536143 2294803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 20:51:14.538039 2294803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 20:51:14.539967 2294803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 20:51:14.542026 2294803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 20:51:14.543759 2294803 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 20:51:14.567466 2294803 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 20:51:14.567563 2294803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:51:14.643315 2294803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-30 20:51:14.632583997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:51:14.643419 2294803 docker.go:294] overlay module found
	I0530 20:51:14.645632 2294803 out.go:177] * Using the docker driver based on user configuration
	I0530 20:51:14.647452 2294803 start.go:295] selected driver: docker
	I0530 20:51:14.647483 2294803 start.go:870] validating driver "docker" against <nil>
	I0530 20:51:14.647499 2294803 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 20:51:14.648138 2294803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:51:14.709167 2294803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-30 20:51:14.698969994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:51:14.709290 2294803 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 20:51:14.709562 2294803 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 20:51:14.711572 2294803 out.go:177] * Using Docker driver with root privileges
	I0530 20:51:14.713413 2294803 cni.go:84] Creating CNI manager for ""
	I0530 20:51:14.713432 2294803 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:51:14.713447 2294803 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0530 20:51:14.713458 2294803 start_flags.go:319] config:
	{Name:addons-084881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:51:14.715885 2294803 out.go:177] * Starting control plane node addons-084881 in cluster addons-084881
	I0530 20:51:14.717788 2294803 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0530 20:51:14.719692 2294803 out.go:177] * Pulling base image ...
	I0530 20:51:14.721507 2294803 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:14.721561 2294803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0530 20:51:14.721563 2294803 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	I0530 20:51:14.721675 2294803 cache.go:57] Caching tarball of preloaded images
	I0530 20:51:14.721760 2294803 preload.go:174] Found /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 20:51:14.721772 2294803 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on containerd
	I0530 20:51:14.722120 2294803 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/config.json ...
	I0530 20:51:14.722149 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/config.json: {Name:mka9807e848cdc8a23dfc97f970cd105bb0e97be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:14.738801 2294803 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0530 20:51:14.738908 2294803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0530 20:51:14.738932 2294803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0530 20:51:14.738941 2294803 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0530 20:51:14.738948 2294803 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0530 20:51:14.738953 2294803 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from local cache
	I0530 20:51:30.132927 2294803 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from cached tarball
	I0530 20:51:30.132979 2294803 cache.go:195] Successfully downloaded all kic artifacts
	I0530 20:51:30.133030 2294803 start.go:364] acquiring machines lock for addons-084881: {Name:mk7b1640b8054b7efbe4cbca84ab1b62233c8a44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 20:51:30.133168 2294803 start.go:368] acquired machines lock for "addons-084881" in 116.381µs
	I0530 20:51:30.133199 2294803 start.go:93] Provisioning new machine with config: &{Name:addons-084881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0530 20:51:30.133281 2294803 start.go:125] createHost starting for "" (driver="docker")
	I0530 20:51:30.135684 2294803 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0530 20:51:30.135949 2294803 start.go:159] libmachine.API.Create for "addons-084881" (driver="docker")
	I0530 20:51:30.135976 2294803 client.go:168] LocalClient.Create starting
	I0530 20:51:30.136120 2294803 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem
	I0530 20:51:30.569090 2294803 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem
	I0530 20:51:31.012014 2294803 cli_runner.go:164] Run: docker network inspect addons-084881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0530 20:51:31.029942 2294803 cli_runner.go:211] docker network inspect addons-084881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0530 20:51:31.030024 2294803 network_create.go:281] running [docker network inspect addons-084881] to gather additional debugging logs...
	I0530 20:51:31.030040 2294803 cli_runner.go:164] Run: docker network inspect addons-084881
	W0530 20:51:31.047935 2294803 cli_runner.go:211] docker network inspect addons-084881 returned with exit code 1
	I0530 20:51:31.047965 2294803 network_create.go:284] error running [docker network inspect addons-084881]: docker network inspect addons-084881: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-084881 not found
	I0530 20:51:31.047976 2294803 network_create.go:286] output of [docker network inspect addons-084881]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-084881 not found
	
	** /stderr **
	I0530 20:51:31.048056 2294803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 20:51:31.068496 2294803 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011a06e0}
	I0530 20:51:31.068533 2294803 network_create.go:123] attempt to create docker network addons-084881 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0530 20:51:31.068592 2294803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-084881 addons-084881
	I0530 20:51:31.139525 2294803 network_create.go:107] docker network addons-084881 192.168.49.0/24 created
	I0530 20:51:31.139557 2294803 kic.go:117] calculated static IP "192.168.49.2" for the "addons-084881" container
	I0530 20:51:31.139631 2294803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0530 20:51:31.156135 2294803 cli_runner.go:164] Run: docker volume create addons-084881 --label name.minikube.sigs.k8s.io=addons-084881 --label created_by.minikube.sigs.k8s.io=true
	I0530 20:51:31.175049 2294803 oci.go:103] Successfully created a docker volume addons-084881
	I0530 20:51:31.175136 2294803 cli_runner.go:164] Run: docker run --rm --name addons-084881-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-084881 --entrypoint /usr/bin/test -v addons-084881:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0530 20:51:32.859694 2294803 cli_runner.go:217] Completed: docker run --rm --name addons-084881-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-084881 --entrypoint /usr/bin/test -v addons-084881:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (1.684508136s)
	I0530 20:51:32.859746 2294803 oci.go:107] Successfully prepared a docker volume addons-084881
	I0530 20:51:32.859771 2294803 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:32.859789 2294803 kic.go:190] Starting extracting preloaded images to volume ...
	I0530 20:51:32.859881 2294803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-084881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0530 20:51:36.882582 2294803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-084881:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.022643992s)
	I0530 20:51:36.882615 2294803 kic.go:199] duration metric: took 4.022823 seconds to extract preloaded images to volume
	W0530 20:51:36.882752 2294803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0530 20:51:36.882866 2294803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0530 20:51:36.946557 2294803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-084881 --name addons-084881 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-084881 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-084881 --network addons-084881 --ip 192.168.49.2 --volume addons-084881:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0530 20:51:37.295674 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Running}}
	I0530 20:51:37.330180 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:51:37.353514 2294803 cli_runner.go:164] Run: docker exec addons-084881 stat /var/lib/dpkg/alternatives/iptables
	I0530 20:51:37.433202 2294803 oci.go:144] the created container "addons-084881" has a running status.
	I0530 20:51:37.433227 2294803 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa...
	I0530 20:51:37.882047 2294803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0530 20:51:37.910074 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:51:37.941189 2294803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0530 20:51:37.941209 2294803 kic_runner.go:114] Args: [docker exec --privileged addons-084881 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0530 20:51:38.054620 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:51:38.094276 2294803 machine.go:88] provisioning docker machine ...
	I0530 20:51:38.094306 2294803 ubuntu.go:169] provisioning hostname "addons-084881"
	I0530 20:51:38.094396 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:38.132475 2294803 main.go:141] libmachine: Using SSH client type: native
	I0530 20:51:38.132932 2294803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 40946 <nil> <nil>}
	I0530 20:51:38.132944 2294803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-084881 && echo "addons-084881" | sudo tee /etc/hostname
	I0530 20:51:38.398342 2294803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-084881
	
	I0530 20:51:38.398486 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:38.429195 2294803 main.go:141] libmachine: Using SSH client type: native
	I0530 20:51:38.429719 2294803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 40946 <nil> <nil>}
	I0530 20:51:38.429739 2294803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-084881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-084881/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-084881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0530 20:51:38.582573 2294803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0530 20:51:38.582635 2294803 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16597-2288886/.minikube CaCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16597-2288886/.minikube}
	I0530 20:51:38.582675 2294803 ubuntu.go:177] setting up certificates
	I0530 20:51:38.582695 2294803 provision.go:83] configureAuth start
	I0530 20:51:38.582770 2294803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-084881
	I0530 20:51:38.604382 2294803 provision.go:138] copyHostCerts
	I0530 20:51:38.604449 2294803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem (1078 bytes)
	I0530 20:51:38.604559 2294803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem (1123 bytes)
	I0530 20:51:38.604614 2294803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem (1679 bytes)
	I0530 20:51:38.604660 2294803 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem org=jenkins.addons-084881 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-084881]
	I0530 20:51:38.852279 2294803 provision.go:172] copyRemoteCerts
	I0530 20:51:38.852348 2294803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0530 20:51:38.852395 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:38.870458 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:38.964256 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0530 20:51:38.994757 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0530 20:51:39.026559 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0530 20:51:39.057592 2294803 provision.go:86] duration metric: configureAuth took 474.870917ms
	I0530 20:51:39.057622 2294803 ubuntu.go:193] setting minikube options for container-runtime
	I0530 20:51:39.057822 2294803 config.go:182] Loaded profile config "addons-084881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 20:51:39.057844 2294803 machine.go:91] provisioned docker machine in 963.55083ms
	I0530 20:51:39.057850 2294803 client.go:171] LocalClient.Create took 8.921869104s
	I0530 20:51:39.057869 2294803 start.go:167] duration metric: libmachine.API.Create for "addons-084881" took 8.921921617s
	I0530 20:51:39.057879 2294803 start.go:300] post-start starting for "addons-084881" (driver="docker")
	I0530 20:51:39.057885 2294803 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0530 20:51:39.057954 2294803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0530 20:51:39.058000 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.076041 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.172300 2294803 ssh_runner.go:195] Run: cat /etc/os-release
	I0530 20:51:39.176421 2294803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0530 20:51:39.176457 2294803 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0530 20:51:39.176469 2294803 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0530 20:51:39.176475 2294803 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0530 20:51:39.176484 2294803 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/addons for local assets ...
	I0530 20:51:39.176560 2294803 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/files for local assets ...
	I0530 20:51:39.176585 2294803 start.go:303] post-start completed in 118.700692ms
	I0530 20:51:39.176899 2294803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-084881
	I0530 20:51:39.195732 2294803 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/config.json ...
	I0530 20:51:39.196038 2294803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0530 20:51:39.196083 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.219793 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.315542 2294803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0530 20:51:39.321639 2294803 start.go:128] duration metric: createHost completed in 9.188343878s
	I0530 20:51:39.321663 2294803 start.go:83] releasing machines lock for "addons-084881", held for 9.188483496s
	I0530 20:51:39.321734 2294803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-084881
	I0530 20:51:39.339454 2294803 ssh_runner.go:195] Run: cat /version.json
	I0530 20:51:39.339472 2294803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0530 20:51:39.339510 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.339535 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:51:39.362699 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.364485 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:51:39.592183 2294803 ssh_runner.go:195] Run: systemctl --version
	I0530 20:51:39.598240 2294803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0530 20:51:39.604368 2294803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0530 20:51:39.636275 2294803 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0530 20:51:39.636361 2294803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0530 20:51:39.670961 2294803 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0530 20:51:39.670985 2294803 start.go:481] detecting cgroup driver to use...
	I0530 20:51:39.671039 2294803 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0530 20:51:39.671114 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0530 20:51:39.686257 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0530 20:51:39.700277 2294803 docker.go:193] disabling cri-docker service (if available) ...
	I0530 20:51:39.700367 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0530 20:51:39.716693 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0530 20:51:39.734038 2294803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0530 20:51:39.834037 2294803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0530 20:51:39.930409 2294803 docker.go:209] disabling docker service ...
	I0530 20:51:39.930498 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0530 20:51:39.953749 2294803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0530 20:51:39.967855 2294803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0530 20:51:40.075440 2294803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0530 20:51:40.178794 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0530 20:51:40.193378 2294803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0530 20:51:40.214923 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0530 20:51:40.228059 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0530 20:51:40.241473 2294803 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0530 20:51:40.241592 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0530 20:51:40.255288 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 20:51:40.268487 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0530 20:51:40.282029 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 20:51:40.295866 2294803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0530 20:51:40.308986 2294803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0530 20:51:40.322648 2294803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0530 20:51:40.333336 2294803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0530 20:51:40.343989 2294803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0530 20:51:40.455303 2294803 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0530 20:51:40.538924 2294803 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0530 20:51:40.539011 2294803 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0530 20:51:40.544870 2294803 start.go:549] Will wait 60s for crictl version
	I0530 20:51:40.544982 2294803 ssh_runner.go:195] Run: which crictl
	I0530 20:51:40.550161 2294803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0530 20:51:40.615902 2294803 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0530 20:51:40.616058 2294803 ssh_runner.go:195] Run: containerd --version
	I0530 20:51:40.649498 2294803 ssh_runner.go:195] Run: containerd --version
	I0530 20:51:40.681971 2294803 out.go:177] * Preparing Kubernetes v1.27.2 on containerd 1.6.21 ...
	I0530 20:51:40.683833 2294803 cli_runner.go:164] Run: docker network inspect addons-084881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 20:51:40.702684 2294803 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0530 20:51:40.707396 2294803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0530 20:51:40.722856 2294803 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:40.722927 2294803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 20:51:40.765914 2294803 containerd.go:604] all images are preloaded for containerd runtime.
	I0530 20:51:40.765947 2294803 containerd.go:518] Images already preloaded, skipping extraction
	I0530 20:51:40.766004 2294803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 20:51:40.809793 2294803 containerd.go:604] all images are preloaded for containerd runtime.
	I0530 20:51:40.809816 2294803 cache_images.go:84] Images are preloaded, skipping loading
	I0530 20:51:40.809873 2294803 ssh_runner.go:195] Run: sudo crictl info
	I0530 20:51:40.855825 2294803 cni.go:84] Creating CNI manager for ""
	I0530 20:51:40.855903 2294803 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:51:40.855919 2294803 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0530 20:51:40.855939 2294803 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-084881 NodeName:addons-084881 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0530 20:51:40.856097 2294803 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-084881"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
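As a hedged aside: a multi-document config like the one above can be exercised without mutating the node. Both subcommands below are standard kubeadm, and the config path matches the one this log writes a few lines later (/var/tmp/minikube/kubeadm.yaml):

	# Print kubeadm's built-in defaults to diff against the generated config.
	kubeadm config print init-defaults
	# Walk through init (preflight, cert and manifest generation) without applying anything.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run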
	I0530 20:51:40.856188 2294803 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-084881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0530 20:51:40.856258 2294803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0530 20:51:40.867908 2294803 binaries.go:44] Found k8s binaries, skipping transfer
	I0530 20:51:40.868025 2294803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0530 20:51:40.879092 2294803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0530 20:51:40.900677 2294803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0530 20:51:40.923015 2294803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0530 20:51:40.944629 2294803 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0530 20:51:40.949055 2294803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0530 20:51:40.962704 2294803 certs.go:56] Setting up /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881 for IP: 192.168.49.2
	I0530 20:51:40.962788 2294803 certs.go:190] acquiring lock for shared ca certs: {Name:mkef74d64a59002b998e67685a207d5c04604358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:40.963533 2294803 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key
	I0530 20:51:41.200345 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt ...
	I0530 20:51:41.200377 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt: {Name:mk157ef564acd0c19f3f749a6956fe0ffd6ca34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.200582 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key ...
	I0530 20:51:41.200597 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key: {Name:mkbb37766b2be36c86ce6ccb5fce2bb07e873688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.200691 2294803 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key
	I0530 20:51:41.616153 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt ...
	I0530 20:51:41.616187 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt: {Name:mkda06c826c65b89669d00ec1e9bb63cb71f4c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.616920 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key ...
	I0530 20:51:41.616939 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key: {Name:mk34835b7eccc447446ebc8782b3cd9db035a479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:41.617695 2294803 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.key
	I0530 20:51:41.617716 2294803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt with IP's: []
	I0530 20:51:42.044722 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt ...
	I0530 20:51:42.044753 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: {Name:mkfff1955f843b2ab93c9d364ec4cdb49f2a3b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.044947 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.key ...
	I0530 20:51:42.044958 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.key: {Name:mkd750a607e2f4270367ede56824fe275bc8738c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.045042 2294803 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2
	I0530 20:51:42.045063 2294803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0530 20:51:42.886407 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2 ...
	I0530 20:51:42.886441 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2: {Name:mkc449d7cc41ec5cfadb7316a2c049c2c4d495ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.886633 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2 ...
	I0530 20:51:42.886645 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2: {Name:mk86003e889133b0a5a37163e5c9bca6d558b6c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:42.886724 2294803 certs.go:337] copying /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt
	I0530 20:51:42.886808 2294803 certs.go:341] copying /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key
	I0530 20:51:42.886859 2294803 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key
	I0530 20:51:42.886880 2294803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt with IP's: []
	I0530 20:51:43.271559 2294803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt ...
	I0530 20:51:43.271595 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt: {Name:mkfaac240443bbd55848e09755e20bac46b2e016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:43.272223 2294803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key ...
	I0530 20:51:43.272241 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key: {Name:mk83834dd94f7978004956ac0737c2e5de724936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:51:43.272505 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem (1675 bytes)
	I0530 20:51:43.272553 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem (1078 bytes)
	I0530 20:51:43.272586 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem (1123 bytes)
	I0530 20:51:43.272614 2294803 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem (1679 bytes)
	I0530 20:51:43.273383 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0530 20:51:43.303288 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0530 20:51:43.334985 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0530 20:51:43.363387 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0530 20:51:43.391752 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0530 20:51:43.420100 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0530 20:51:43.450145 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0530 20:51:43.479554 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0530 20:51:43.508327 2294803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0530 20:51:43.539983 2294803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0530 20:51:43.562906 2294803 ssh_runner.go:195] Run: openssl version
	I0530 20:51:43.570429 2294803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0530 20:51:43.582784 2294803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0530 20:51:43.587695 2294803 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 30 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0530 20:51:43.587777 2294803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0530 20:51:43.596891 2294803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
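The b5213941.0 name above follows the OpenSSL subject-hash convention: the symlink is named after the CA certificate's hash so TLS clients can look it up in /etc/ssl/certs. A sketch reproducing the two steps from this log:

	# Compute the subject hash OpenSSL uses to locate trusted CAs.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Link the cert under /etc/ssl/certs/<hash>.0 so it joins the system trust store.
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"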
	I0530 20:51:43.609599 2294803 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0530 20:51:43.614966 2294803 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0530 20:51:43.615043 2294803 kubeadm.go:404] StartCluster: {Name:addons-084881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-084881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:51:43.615158 2294803 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0530 20:51:43.615264 2294803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0530 20:51:43.659559 2294803 cri.go:88] found id: ""
	I0530 20:51:43.659665 2294803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0530 20:51:43.671088 2294803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0530 20:51:43.682573 2294803 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0530 20:51:43.682651 2294803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0530 20:51:43.694366 2294803 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0530 20:51:43.694418 2294803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0530 20:51:43.802980 2294803 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
	I0530 20:51:43.881743 2294803 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0530 20:51:43.882012 2294803 kubeadm.go:322] W0530 20:51:43.881204     894 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0530 20:51:51.233796 2294803 kubeadm.go:322] W0530 20:51:51.232665     894 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0530 20:52:00.728098 2294803 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0530 20:52:00.728152 2294803 kubeadm.go:322] [preflight] Running pre-flight checks
	I0530 20:52:00.728235 2294803 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0530 20:52:00.728287 2294803 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-aws
	I0530 20:52:00.728319 2294803 kubeadm.go:322] OS: Linux
	I0530 20:52:00.728364 2294803 kubeadm.go:322] CGROUPS_CPU: enabled
	I0530 20:52:00.728409 2294803 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0530 20:52:00.728454 2294803 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0530 20:52:00.728499 2294803 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0530 20:52:00.728545 2294803 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0530 20:52:00.728593 2294803 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0530 20:52:00.728636 2294803 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0530 20:52:00.728682 2294803 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0530 20:52:00.728726 2294803 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0530 20:52:00.728794 2294803 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0530 20:52:00.728883 2294803 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0530 20:52:00.728969 2294803 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0530 20:52:00.729029 2294803 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0530 20:52:00.731215 2294803 out.go:204]   - Generating certificates and keys ...
	I0530 20:52:00.731403 2294803 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0530 20:52:00.731513 2294803 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0530 20:52:00.731586 2294803 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0530 20:52:00.731666 2294803 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0530 20:52:00.731727 2294803 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0530 20:52:00.731778 2294803 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0530 20:52:00.731831 2294803 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0530 20:52:00.731948 2294803 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-084881 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0530 20:52:00.732001 2294803 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0530 20:52:00.732117 2294803 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-084881 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0530 20:52:00.732182 2294803 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0530 20:52:00.732247 2294803 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0530 20:52:00.732294 2294803 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0530 20:52:00.732351 2294803 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0530 20:52:00.732404 2294803 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0530 20:52:00.732458 2294803 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0530 20:52:00.732522 2294803 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0530 20:52:00.732577 2294803 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0530 20:52:00.732681 2294803 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0530 20:52:00.732766 2294803 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0530 20:52:00.732806 2294803 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0530 20:52:00.732873 2294803 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0530 20:52:00.735381 2294803 out.go:204]   - Booting up control plane ...
	I0530 20:52:00.735484 2294803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0530 20:52:00.735557 2294803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0530 20:52:00.735620 2294803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0530 20:52:00.735696 2294803 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0530 20:52:00.735841 2294803 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0530 20:52:00.735912 2294803 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002052 seconds
	I0530 20:52:00.736012 2294803 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0530 20:52:00.736128 2294803 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0530 20:52:00.736183 2294803 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0530 20:52:00.736352 2294803 kubeadm.go:322] [mark-control-plane] Marking the node addons-084881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0530 20:52:00.736404 2294803 kubeadm.go:322] [bootstrap-token] Using token: m1twdw.2d8g3p9wgkp7ep6a
	I0530 20:52:00.738507 2294803 out.go:204]   - Configuring RBAC rules ...
	I0530 20:52:00.738735 2294803 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0530 20:52:00.738863 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0530 20:52:00.739064 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0530 20:52:00.739232 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0530 20:52:00.739383 2294803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0530 20:52:00.739540 2294803 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0530 20:52:00.739702 2294803 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0530 20:52:00.739776 2294803 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0530 20:52:00.739857 2294803 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0530 20:52:00.739894 2294803 kubeadm.go:322] 
	I0530 20:52:00.739983 2294803 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0530 20:52:00.740005 2294803 kubeadm.go:322] 
	I0530 20:52:00.740134 2294803 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0530 20:52:00.740171 2294803 kubeadm.go:322] 
	I0530 20:52:00.740212 2294803 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0530 20:52:00.740313 2294803 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0530 20:52:00.740403 2294803 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0530 20:52:00.740428 2294803 kubeadm.go:322] 
	I0530 20:52:00.740521 2294803 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0530 20:52:00.740529 2294803 kubeadm.go:322] 
	I0530 20:52:00.740591 2294803 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0530 20:52:00.740596 2294803 kubeadm.go:322] 
	I0530 20:52:00.740649 2294803 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0530 20:52:00.740725 2294803 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0530 20:52:00.740794 2294803 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0530 20:52:00.740798 2294803 kubeadm.go:322] 
	I0530 20:52:00.740883 2294803 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0530 20:52:00.740960 2294803 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0530 20:52:00.740965 2294803 kubeadm.go:322] 
	I0530 20:52:00.741052 2294803 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m1twdw.2d8g3p9wgkp7ep6a \
	I0530 20:52:00.741156 2294803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff077636a4006c51f7456795481b97b5286c2b636cefd4a65a893c56dd417d66 \
	I0530 20:52:00.741177 2294803 kubeadm.go:322] 	--control-plane 
	I0530 20:52:00.741181 2294803 kubeadm.go:322] 
	I0530 20:52:00.741266 2294803 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0530 20:52:00.741270 2294803 kubeadm.go:322] 
	I0530 20:52:00.741392 2294803 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m1twdw.2d8g3p9wgkp7ep6a \
	I0530 20:52:00.741515 2294803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ff077636a4006c51f7456795481b97b5286c2b636cefd4a65a893c56dd417d66 
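The bootstrap token printed above expires after the 24h ttl set in the config. If it has lapsed, fresh join credentials can be generated on the control plane; a hedged sketch using standard kubeadm and openssl invocations (note this cluster keeps its CA under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki):

	# Mint a new bootstrap token and print a ready-to-run worker join command.
	kubeadm token create --print-join-command
	# Recompute the --discovery-token-ca-cert-hash by hand from the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'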
	I0530 20:52:00.741524 2294803 cni.go:84] Creating CNI manager for ""
	I0530 20:52:00.741531 2294803 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:52:00.743765 2294803 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0530 20:52:00.745475 2294803 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0530 20:52:00.752632 2294803 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0530 20:52:00.752649 2294803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0530 20:52:00.803697 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0530 20:52:01.745358 2294803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0530 20:52:01.745494 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:01.745571 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=6d0d5d534b34391ed9438fcde26494d33a798fae minikube.k8s.io/name=addons-084881 minikube.k8s.io/updated_at=2023_05_30T20_52_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:01.895549 2294803 ops.go:34] apiserver oom_adj: -16
	I0530 20:52:01.895642 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:02.551470 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:03.051013 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:03.551640 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:04.051575 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:04.551213 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:05.050967 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:05.551501 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:06.050989 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:06.551904 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:07.051083 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:07.551854 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:08.050936 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:08.551055 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:09.051944 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:09.551486 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:10.051711 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:10.551802 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:11.051165 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:11.551059 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:12.051087 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:12.551834 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:13.050958 2294803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 20:52:13.298792 2294803 kubeadm.go:1076] duration metric: took 11.553347185s to wait for elevateKubeSystemPrivileges.
	I0530 20:52:13.298821 2294803 kubeadm.go:406] StartCluster complete in 29.683781643s
	I0530 20:52:13.298839 2294803 settings.go:142] acquiring lock: {Name:mkdbeb66ef6240a2ca39c4b606ba49055796e4d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:52:13.298959 2294803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 20:52:13.299332 2294803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/kubeconfig: {Name:mk0fdfd8357f1362eedcc9930d50aa3f3a348d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 20:52:13.301965 2294803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0530 20:52:13.301979 2294803 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0530 20:52:13.302060 2294803 addons.go:66] Setting volumesnapshots=true in profile "addons-084881"
	I0530 20:52:13.302074 2294803 addons.go:228] Setting addon volumesnapshots=true in "addons-084881"
	I0530 20:52:13.302113 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.302124 2294803 addons.go:66] Setting gcp-auth=true in profile "addons-084881"
	I0530 20:52:13.302148 2294803 mustload.go:65] Loading cluster: addons-084881
	I0530 20:52:13.302371 2294803 config.go:182] Loaded profile config "addons-084881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 20:52:13.302583 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.302631 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.302849 2294803 config.go:182] Loaded profile config "addons-084881": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 20:52:13.302880 2294803 addons.go:66] Setting cloud-spanner=true in profile "addons-084881"
	I0530 20:52:13.302889 2294803 addons.go:228] Setting addon cloud-spanner=true in "addons-084881"
	I0530 20:52:13.302920 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.303335 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.303430 2294803 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-084881"
	I0530 20:52:13.303456 2294803 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-084881"
	I0530 20:52:13.303485 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.303924 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.303996 2294803 addons.go:66] Setting default-storageclass=true in profile "addons-084881"
	I0530 20:52:13.304014 2294803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-084881"
	I0530 20:52:13.304264 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.304842 2294803 addons.go:66] Setting inspektor-gadget=true in profile "addons-084881"
	I0530 20:52:13.304860 2294803 addons.go:228] Setting addon inspektor-gadget=true in "addons-084881"
	I0530 20:52:13.304899 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.305449 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.305604 2294803 addons.go:66] Setting ingress=true in profile "addons-084881"
	I0530 20:52:13.305646 2294803 addons.go:228] Setting addon ingress=true in "addons-084881"
	I0530 20:52:13.305736 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.306281 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.306459 2294803 addons.go:66] Setting ingress-dns=true in profile "addons-084881"
	I0530 20:52:13.306516 2294803 addons.go:228] Setting addon ingress-dns=true in "addons-084881"
	I0530 20:52:13.306602 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.307159 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.307422 2294803 addons.go:66] Setting registry=true in profile "addons-084881"
	I0530 20:52:13.307468 2294803 addons.go:228] Setting addon registry=true in "addons-084881"
	I0530 20:52:13.307529 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.308116 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.308265 2294803 addons.go:66] Setting metrics-server=true in profile "addons-084881"
	I0530 20:52:13.308314 2294803 addons.go:228] Setting addon metrics-server=true in "addons-084881"
	I0530 20:52:13.308386 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.308619 2294803 addons.go:66] Setting storage-provisioner=true in profile "addons-084881"
	I0530 20:52:13.308641 2294803 addons.go:228] Setting addon storage-provisioner=true in "addons-084881"
	I0530 20:52:13.308675 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.325888 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.350734 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.480651 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.504247 2294803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0530 20:52:13.533814 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0530 20:52:13.533909 2294803 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0530 20:52:13.541030 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0530 20:52:13.541047 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0530 20:52:13.541048 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0530 20:52:13.541117 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.541125 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.550974 2294803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 20:52:13.556748 2294803 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0530 20:52:13.556774 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0530 20:52:13.556842 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.578962 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0530 20:52:13.581177 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0530 20:52:13.584959 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0530 20:52:13.586659 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0530 20:52:13.593295 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0530 20:52:13.595113 2294803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0530 20:52:13.596806 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0530 20:52:13.596832 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0530 20:52:13.596907 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.601428 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0530 20:52:13.603514 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0530 20:52:13.609732 2294803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0530 20:52:13.611564 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0530 20:52:13.611594 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0530 20:52:13.611688 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.621884 2294803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0530 20:52:13.624307 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0530 20:52:13.624334 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0530 20:52:13.624417 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.679339 2294803 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0530 20:52:13.681556 2294803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0530 20:52:13.687142 2294803 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0530 20:52:13.687171 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0530 20:52:13.687243 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.697535 2294803 out.go:177]   - Using image docker.io/registry:2.8.1
	I0530 20:52:13.699773 2294803 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0530 20:52:13.699803 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0530 20:52:13.699887 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.701021 2294803 addons.go:228] Setting addon default-storageclass=true in "addons-084881"
	I0530 20:52:13.701063 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:13.701617 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:13.706273 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0530 20:52:13.711964 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
	I0530 20:52:13.713894 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0530 20:52:13.716112 2294803 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0530 20:52:13.716144 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16145 bytes)
	I0530 20:52:13.716231 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.773489 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.812268 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.853412 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.893411 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.894797 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.900678 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.911893 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.937409 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.957420 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:13.962557 2294803 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0530 20:52:13.962577 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0530 20:52:13.962641 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:13.991885 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:14.152433 2294803 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-084881" context rescaled to 1 replicas
	I0530 20:52:14.152475 2294803 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0530 20:52:14.155465 2294803 out.go:177] * Verifying Kubernetes components...
	I0530 20:52:14.158105 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 20:52:14.216696 2294803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
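The sed pipeline above injects a hosts block into the CoreDNS Corefile ahead of the forward plugin. A sketch of verifying the result (the expected block is reconstructed from the sed expressions in this log, not copied from live output; kubeconfig/context selection is environment-dependent):

	# Dump the live Corefile from the coredns ConfigMap.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to contain, before "forward . /etc/resolv.conf":
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }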
	I0530 20:52:14.472188 2294803 node_ready.go:35] waiting up to 6m0s for node "addons-084881" to be "Ready" ...
	I0530 20:52:14.476262 2294803 node_ready.go:49] node "addons-084881" has status "Ready":"True"
	I0530 20:52:14.476290 2294803 node_ready.go:38] duration metric: took 4.072056ms waiting for node "addons-084881" to be "Ready" ...
	I0530 20:52:14.476301 2294803 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0530 20:52:14.490748 2294803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:14.603994 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0530 20:52:14.607091 2294803 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0530 20:52:14.607113 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0530 20:52:14.687620 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0530 20:52:14.687684 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0530 20:52:14.699006 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0530 20:52:14.733985 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0530 20:52:14.739726 2294803 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0530 20:52:14.739800 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0530 20:52:14.746958 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0530 20:52:14.751108 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0530 20:52:14.751199 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0530 20:52:14.981349 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0530 20:52:14.986488 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0530 20:52:14.986513 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0530 20:52:14.999104 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0530 20:52:14.999134 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0530 20:52:15.055916 2294803 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0530 20:52:15.055945 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0530 20:52:15.065021 2294803 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0530 20:52:15.065047 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0530 20:52:15.086353 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0530 20:52:15.086380 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0530 20:52:15.253477 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0530 20:52:15.253501 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0530 20:52:15.258264 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0530 20:52:15.258289 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0530 20:52:15.321921 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0530 20:52:15.321947 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0530 20:52:15.325536 2294803 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0530 20:52:15.325560 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0530 20:52:15.340794 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0530 20:52:15.402473 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0530 20:52:15.402498 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0530 20:52:15.440782 2294803 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0530 20:52:15.440805 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0530 20:52:15.592976 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0530 20:52:15.593001 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0530 20:52:15.607532 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0530 20:52:15.619506 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0530 20:52:15.619539 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0530 20:52:15.622530 2294803 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0530 20:52:15.622554 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0530 20:52:15.743458 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0530 20:52:15.743565 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0530 20:52:15.807427 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0530 20:52:15.812093 2294803 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0530 20:52:15.812118 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0530 20:52:16.063888 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0530 20:52:16.063916 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0530 20:52:16.088264 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0530 20:52:16.088289 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0530 20:52:16.242528 2294803 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0530 20:52:16.242553 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0530 20:52:16.330106 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0530 20:52:16.330190 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0530 20:52:16.400648 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0530 20:52:16.528965 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:16.573169 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0530 20:52:16.573242 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0530 20:52:16.714853 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0530 20:52:16.714880 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0530 20:52:16.741349 2294803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.524579689s)
	I0530 20:52:16.741427 2294803 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
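
The completed command above rewrites CoreDNS in place: it dumps the coredns ConfigMap, pipes the Corefile through sed to insert a hosts plugin block ahead of the "forward . /etc/resolv.conf" directive (plus a "log" directive ahead of "errors"), then replaces the ConfigMap. The inserted fragment, recoverable from the sed expression itself, is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

With this block in place, in-cluster lookups of host.minikube.internal resolve to the host's address on the cluster network (192.168.49.1), and fallthrough hands every other name on to the remaining plugins.
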
	I0530 20:52:16.891807 2294803 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0530 20:52:16.891877 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0530 20:52:17.113926 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0530 20:52:17.789091 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.185063182s)
	I0530 20:52:17.789151 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.090081613s)
	I0530 20:52:17.789177 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.05512654s)
	I0530 20:52:19.056120 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:20.311620 2294803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0530 20:52:20.311744 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:20.342912 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
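
The cli_runner/sshutil pair above resolves where Docker published the container's SSH port: the inspect template indexes NetworkSettings.Ports["22/tcp"], takes the first mapping's HostPort (40946 here), and the SSH client is then pointed at 127.0.0.1 on that port. The same lookup in Go, as a sketch assuming the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort returns the host port Docker mapped to container port
    // 22/tcp, using the same inspect template shown in the log.
    func hostSSHPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("addons-084881")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", port) // e.g. 40946, as in the log
    }
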
	I0530 20:52:20.644972 2294803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0530 20:52:20.686332 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.939281446s)
	I0530 20:52:20.686413 2294803 addons.go:464] Verifying addon ingress=true in "addons-084881"
	I0530 20:52:20.686775 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.705271159s)
	I0530 20:52:20.686819 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.345997789s)
	I0530 20:52:20.686892 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.079332528s)
	I0530 20:52:20.686988 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.87953021s)
	I0530 20:52:20.687055 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.286316477s)
	I0530 20:52:20.688702 2294803 out.go:177] * Verifying ingress addon...
	I0530 20:52:20.688892 2294803 addons.go:464] Verifying addon registry=true in "addons-084881"
	I0530 20:52:20.688901 2294803 addons.go:464] Verifying addon metrics-server=true in "addons-084881"
	W0530 20:52:20.689029 2294803 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0530 20:52:20.692471 2294803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0530 20:52:20.694127 2294803 out.go:177] * Verifying registry addon...
	I0530 20:52:20.696965 2294803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0530 20:52:20.694213 2294803 retry.go:31] will retry after 322.526505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
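
Both failures above are the same ordering problem, not two distinct bugs: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the very apply that also creates the snapshot.storage.k8s.io CRDs, and the API server has not finished registering snapshot.storage.k8s.io/v1 by the time the custom resource arrives, hence "no matches for kind ... ensure CRDs are installed first". The stdout confirms the CRDs themselves were created, so minikube simply retries: retry.go:31 schedules a ~322ms backoff, the re-run at 20:52:21.019915 below adds --force, and that apply completes at 20:52:23.219635. A hedged sketch of such a retry loop (retryApply is illustrative, not minikube's actual helper):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retryApply re-runs `kubectl apply` with growing backoff until the API
    // server has registered the CRDs the manifests depend on.
    func retryApply(files []string, attempts int) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        backoff := 300 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
            time.Sleep(backoff)
            backoff *= 2
        }
        return lastErr
    }

    func main() {
        files := []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
        if err := retryApply(files, 5); err != nil {
            fmt.Println(err)
        }
    }
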
	I0530 20:52:20.699113 2294803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0530 20:52:20.699133 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:20.704447 2294803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0530 20:52:20.704468 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
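
The kapi.go:75/86/96 sequence above is a plain poll loop: list the pods matching a label selector, note how many were found, then keep re-checking until each one leaves Pending. A compact client-go version of the same wait, as a sketch only (assumes a default kubeconfig and reuses the selector from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        selector := "kubernetes.io/minikube-addons=registry"

        // Re-check every 500ms, up to 6 minutes, like the kapi.go wait above.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil // still Pending, keep waiting
                }
            }
            return len(pods.Items) > 0, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("all registry pods Running")
    }
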
	I0530 20:52:20.758037 2294803 addons.go:228] Setting addon gcp-auth=true in "addons-084881"
	I0530 20:52:20.758136 2294803 host.go:66] Checking if "addons-084881" exists ...
	I0530 20:52:20.758682 2294803 cli_runner.go:164] Run: docker container inspect addons-084881 --format={{.State.Status}}
	I0530 20:52:20.788210 2294803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0530 20:52:20.788261 2294803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-084881
	I0530 20:52:20.818410 2294803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40946 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/addons-084881/id_rsa Username:docker}
	I0530 20:52:21.019915 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0530 20:52:21.204270 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:21.210275 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:21.554030 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:21.706308 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:21.717424 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:22.203848 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:22.211648 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:22.774301 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:22.774978 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:22.907344 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.793367003s)
	I0530 20:52:22.907390 2294803 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-084881"
	I0530 20:52:22.907763 2294803 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.119440628s)
	I0530 20:52:22.910180 2294803 out.go:177] * Verifying csi-hostpath-driver addon...
	I0530 20:52:22.913624 2294803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0530 20:52:22.914978 2294803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0530 20:52:22.919337 2294803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0530 20:52:22.921364 2294803 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0530 20:52:22.921388 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0530 20:52:22.935202 2294803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0530 20:52:22.935235 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:23.000476 2294803 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0530 20:52:23.000515 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0530 20:52:23.189501 2294803 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0530 20:52:23.189528 2294803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0530 20:52:23.204055 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:23.209762 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:23.219635 2294803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.199659782s)
	I0530 20:52:23.236919 2294803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0530 20:52:23.441809 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:23.704485 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:23.709800 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:23.942934 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:24.051781 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:24.108969 2294803 addons.go:464] Verifying addon gcp-auth=true in "addons-084881"
	I0530 20:52:24.111686 2294803 out.go:177] * Verifying gcp-auth addon...
	I0530 20:52:24.120401 2294803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0530 20:52:24.130460 2294803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0530 20:52:24.130522 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:24.204392 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:24.210525 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:24.442756 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:24.634493 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:24.704936 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:24.710548 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:24.942849 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:25.136636 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:25.204244 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:25.209979 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:25.441823 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:25.634654 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:25.705098 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:25.710350 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:25.941877 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:26.135420 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:26.207597 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:26.213516 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:26.442226 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:26.517208 2294803 pod_ready.go:102] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"False"
	I0530 20:52:26.635963 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:26.705367 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:26.711133 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:26.941743 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:27.134749 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:27.216209 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:27.225860 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:27.442129 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:27.515860 2294803 pod_ready.go:92] pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.515886 2294803 pod_ready.go:81] duration metric: took 13.025106059s waiting for pod "coredns-5d78c9869d-ksg2p" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.515897 2294803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.518444 2294803 pod_ready.go:97] error getting pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-mrcpc" not found
	I0530 20:52:27.518472 2294803 pod_ready.go:81] duration metric: took 2.568692ms waiting for pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace to be "Ready" ...
	E0530 20:52:27.518483 2294803 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-mrcpc" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-mrcpc" not found
	I0530 20:52:27.518493 2294803 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.524999 2294803 pod_ready.go:92] pod "etcd-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.525024 2294803 pod_ready.go:81] duration metric: took 6.523751ms waiting for pod "etcd-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.525040 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.532456 2294803 pod_ready.go:92] pod "kube-apiserver-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.532479 2294803 pod_ready.go:81] duration metric: took 7.431773ms waiting for pod "kube-apiserver-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.532489 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.540934 2294803 pod_ready.go:92] pod "kube-controller-manager-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.540960 2294803 pod_ready.go:81] duration metric: took 8.460555ms waiting for pod "kube-controller-manager-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.540973 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-427l8" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.634226 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:27.703899 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:27.709811 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:27.712834 2294803 pod_ready.go:92] pod "kube-proxy-427l8" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:27.712856 2294803 pod_ready.go:81] duration metric: took 171.876453ms waiting for pod "kube-proxy-427l8" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.712867 2294803 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:27.943829 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:28.113980 2294803 pod_ready.go:92] pod "kube-scheduler-addons-084881" in "kube-system" namespace has status "Ready":"True"
	I0530 20:52:28.114002 2294803 pod_ready.go:81] duration metric: took 401.128341ms waiting for pod "kube-scheduler-addons-084881" in "kube-system" namespace to be "Ready" ...
	I0530 20:52:28.114012 2294803 pod_ready.go:38] duration metric: took 13.637701546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
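
The pod_ready checks above key off the pod's Ready condition rather than its phase: a pod only logs has status "Ready":"True" once the PodReady entry in status.conditions turns True. A small helper in that spirit (a sketch, not minikube's actual code):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True,
    // matching the `has status "Ready":"True"` lines in the log.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
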
	I0530 20:52:28.114048 2294803 api_server.go:52] waiting for apiserver process to appear ...
	I0530 20:52:28.114119 2294803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 20:52:28.135654 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:28.138141 2294803 api_server.go:72] duration metric: took 13.985613931s to wait for apiserver process to appear ...
	I0530 20:52:28.138166 2294803 api_server.go:88] waiting for apiserver healthz status ...
	I0530 20:52:28.138186 2294803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0530 20:52:28.147439 2294803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0530 20:52:28.149078 2294803 api_server.go:141] control plane version: v1.27.2
	I0530 20:52:28.149138 2294803 api_server.go:131] duration metric: took 10.964755ms to wait for apiserver health ...
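
The healthz probe above is simply an HTTPS GET against the apiserver's /healthz endpoint, which answers 200 with the literal body "ok" when healthy. A minimal Go equivalent; note the sketch skips TLS verification purely for brevity (an assumption made here, since real callers should trust the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only assumption: trust-all TLS instead of the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
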
	I0530 20:52:28.149161 2294803 system_pods.go:43] waiting for kube-system pods to appear ...
	I0530 20:52:28.204853 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:28.209526 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:28.321292 2294803 system_pods.go:59] 17 kube-system pods found
	I0530 20:52:28.321496 2294803 system_pods.go:61] "coredns-5d78c9869d-ksg2p" [f9a8cda2-9fe0-4a90-b6f2-942e2dcd3627] Running
	I0530 20:52:28.321506 2294803 system_pods.go:61] "csi-hostpath-attacher-0" [30f584e7-8b32-45c8-acd4-d376356a7976] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0530 20:52:28.321517 2294803 system_pods.go:61] "csi-hostpath-resizer-0" [6f3ba8c9-8d8d-4d9f-b9c4-cde2c7580e06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0530 20:52:28.321527 2294803 system_pods.go:61] "csi-hostpathplugin-rlhv5" [f31d867d-34d9-4778-9c25-f97506b185c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0530 20:52:28.321539 2294803 system_pods.go:61] "etcd-addons-084881" [b5859865-ca58-40b0-b6bf-bcb22c22680b] Running
	I0530 20:52:28.321545 2294803 system_pods.go:61] "kindnet-rfjr4" [d446764b-ed16-41f6-b467-9265e9e62df5] Running
	I0530 20:52:28.321553 2294803 system_pods.go:61] "kube-apiserver-addons-084881" [ab71c754-cafc-494d-ae44-6a3fecd7f1dc] Running
	I0530 20:52:28.321559 2294803 system_pods.go:61] "kube-controller-manager-addons-084881" [30678b6d-fc50-4927-9eec-4dff4fcd73c6] Running
	I0530 20:52:28.321565 2294803 system_pods.go:61] "kube-ingress-dns-minikube" [2cc16a5d-5d03-4b00-bbf5-84737deefcd5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0530 20:52:28.321575 2294803 system_pods.go:61] "kube-proxy-427l8" [c25d4286-4618-4eca-bc9d-da963349be52] Running
	I0530 20:52:28.321580 2294803 system_pods.go:61] "kube-scheduler-addons-084881" [65663488-1806-4ad2-81a2-bedccf0bde50] Running
	I0530 20:52:28.321587 2294803 system_pods.go:61] "metrics-server-844d8db974-l29tb" [2f3ce57c-fd89-420e-863b-e5b166ccdb49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0530 20:52:28.321599 2294803 system_pods.go:61] "registry-j74nb" [80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0530 20:52:28.321607 2294803 system_pods.go:61] "registry-proxy-d7x5f" [f4513549-bfa8-495e-8b35-eee656d4eb84] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0530 20:52:28.321617 2294803 system_pods.go:61] "snapshot-controller-75bbb956b9-7w6s9" [2fe5b3f5-6c7e-49e0-acf2-4d2af6038490] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.321625 2294803 system_pods.go:61] "snapshot-controller-75bbb956b9-f5zmq" [df4c40b4-4a0e-49c0-b7dc-6038c24eee2a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.321631 2294803 system_pods.go:61] "storage-provisioner" [b9a2f28a-53c6-47f5-9a9c-44bfb2616c7c] Running
	I0530 20:52:28.321636 2294803 system_pods.go:74] duration metric: took 172.459169ms to wait for pod list to return data ...
	I0530 20:52:28.321644 2294803 default_sa.go:34] waiting for default service account to be created ...
	I0530 20:52:28.441262 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:28.513685 2294803 default_sa.go:45] found service account: "default"
	I0530 20:52:28.513708 2294803 default_sa.go:55] duration metric: took 192.05909ms for default service account to be created ...
	I0530 20:52:28.513719 2294803 system_pods.go:116] waiting for k8s-apps to be running ...
	I0530 20:52:28.634602 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:28.704293 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:28.709898 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:28.719835 2294803 system_pods.go:86] 17 kube-system pods found
	I0530 20:52:28.719915 2294803 system_pods.go:89] "coredns-5d78c9869d-ksg2p" [f9a8cda2-9fe0-4a90-b6f2-942e2dcd3627] Running
	I0530 20:52:28.719934 2294803 system_pods.go:89] "csi-hostpath-attacher-0" [30f584e7-8b32-45c8-acd4-d376356a7976] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0530 20:52:28.719944 2294803 system_pods.go:89] "csi-hostpath-resizer-0" [6f3ba8c9-8d8d-4d9f-b9c4-cde2c7580e06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0530 20:52:28.719956 2294803 system_pods.go:89] "csi-hostpathplugin-rlhv5" [f31d867d-34d9-4778-9c25-f97506b185c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0530 20:52:28.719966 2294803 system_pods.go:89] "etcd-addons-084881" [b5859865-ca58-40b0-b6bf-bcb22c22680b] Running
	I0530 20:52:28.719974 2294803 system_pods.go:89] "kindnet-rfjr4" [d446764b-ed16-41f6-b467-9265e9e62df5] Running
	I0530 20:52:28.719982 2294803 system_pods.go:89] "kube-apiserver-addons-084881" [ab71c754-cafc-494d-ae44-6a3fecd7f1dc] Running
	I0530 20:52:28.719988 2294803 system_pods.go:89] "kube-controller-manager-addons-084881" [30678b6d-fc50-4927-9eec-4dff4fcd73c6] Running
	I0530 20:52:28.720003 2294803 system_pods.go:89] "kube-ingress-dns-minikube" [2cc16a5d-5d03-4b00-bbf5-84737deefcd5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0530 20:52:28.720011 2294803 system_pods.go:89] "kube-proxy-427l8" [c25d4286-4618-4eca-bc9d-da963349be52] Running
	I0530 20:52:28.720019 2294803 system_pods.go:89] "kube-scheduler-addons-084881" [65663488-1806-4ad2-81a2-bedccf0bde50] Running
	I0530 20:52:28.720027 2294803 system_pods.go:89] "metrics-server-844d8db974-l29tb" [2f3ce57c-fd89-420e-863b-e5b166ccdb49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0530 20:52:28.720037 2294803 system_pods.go:89] "registry-j74nb" [80ac9d6c-8eb3-46f0-b2fd-9d0e5d032568] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0530 20:52:28.720045 2294803 system_pods.go:89] "registry-proxy-d7x5f" [f4513549-bfa8-495e-8b35-eee656d4eb84] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0530 20:52:28.720053 2294803 system_pods.go:89] "snapshot-controller-75bbb956b9-7w6s9" [2fe5b3f5-6c7e-49e0-acf2-4d2af6038490] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.720063 2294803 system_pods.go:89] "snapshot-controller-75bbb956b9-f5zmq" [df4c40b4-4a0e-49c0-b7dc-6038c24eee2a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0530 20:52:28.720069 2294803 system_pods.go:89] "storage-provisioner" [b9a2f28a-53c6-47f5-9a9c-44bfb2616c7c] Running
	I0530 20:52:28.720082 2294803 system_pods.go:126] duration metric: took 206.35743ms to wait for k8s-apps to be running ...
	I0530 20:52:28.720091 2294803 system_svc.go:44] waiting for kubelet service to be running ....
	I0530 20:52:28.720151 2294803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 20:52:28.736989 2294803 system_svc.go:56] duration metric: took 16.889046ms WaitForService to wait for kubelet.
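
The kubelet check above relies on the exit code alone: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active, so "is kubelet running" reduces to "did the command succeed". Run locally (minikube runs it over SSH), that is just:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; the unit state is conveyed by the exit code.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
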
	I0530 20:52:28.737018 2294803 kubeadm.go:581] duration metric: took 14.584496765s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0530 20:52:28.737051 2294803 node_conditions.go:102] verifying NodePressure condition ...
	I0530 20:52:28.915820 2294803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0530 20:52:28.915852 2294803 node_conditions.go:123] node cpu capacity is 2
	I0530 20:52:28.915865 2294803 node_conditions.go:105] duration metric: took 178.807955ms to run NodePressure ...
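
node_conditions.go above reads the node's status: the capacity figures (ephemeral-storage 203034800Ki, cpu 2) come from status.capacity, while the NodePressure verification inspects conditions such as MemoryPressure, DiskPressure and PIDPressure, none of which may be True for scheduling to stay healthy. A client-go sketch that prints both, assuming a default kubeconfig:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Println(n.Name, "cpu:", n.Status.Capacity.Cpu(),
                "ephemeral-storage:", n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                fmt.Printf("  %s=%s\n", c.Type, c.Status)
            }
        }
    }
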
	I0530 20:52:28.915903 2294803 start.go:228] waiting for startup goroutines ...
	I0530 20:52:28.942254 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:29.135112 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:29.204878 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:29.209810 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:29.442942 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:29.635238 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:29.703764 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:29.709647 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:29.941996 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:30.134857 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:30.213434 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:30.214677 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:30.441435 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:30.634483 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:30.704864 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:30.710186 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:30.941399 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:31.134593 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:31.203946 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:31.209524 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:31.441387 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:31.634217 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:31.704283 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:31.713197 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:31.940608 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:32.134640 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:32.203811 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:32.209459 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:32.441090 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:32.638501 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:32.704693 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:32.709680 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:32.942494 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:33.138331 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:33.216512 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:33.223692 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:33.443214 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:33.636219 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:33.707355 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:33.718727 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:33.941967 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:34.135199 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:34.206206 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:34.216166 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:34.442755 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:34.635312 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:34.706201 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:34.712471 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:34.946808 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:35.137658 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:35.207679 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:35.215256 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:35.447643 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:35.634787 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:35.705071 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:35.713443 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:35.942928 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:36.135246 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:36.204714 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:36.214091 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:36.442007 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:36.635301 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:36.704040 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:36.711409 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:36.940839 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:37.134713 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:37.204965 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:37.210018 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:37.440832 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:37.634480 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:37.715779 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:37.723035 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:37.946849 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:38.135137 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:38.204697 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:38.209516 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:38.441632 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:38.635292 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:38.704508 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:38.718055 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:38.942034 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:39.137024 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:39.205191 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:39.211041 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0530 20:52:39.443632 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:39.636671 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:39.711213 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:39.713592 2294803 kapi.go:107] duration metric: took 19.016627311s to wait for kubernetes.io/minikube-addons=registry ...
	I0530 20:52:39.945383 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:40.143760 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:40.204933 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:40.442556 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:40.635699 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:40.705568 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:40.942674 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:41.135196 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:41.203833 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:41.442519 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:41.639997 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:41.705138 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:41.941899 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:42.135097 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:42.204046 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:42.443114 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:42.635028 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:42.703948 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:42.941745 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:43.139013 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:43.226375 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:43.442112 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:43.647259 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:43.704062 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:43.941357 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:44.134514 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:44.206809 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:44.442550 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:44.635160 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:44.704482 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:44.942459 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:45.134450 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:45.208464 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:45.441916 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:45.635259 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:45.704325 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:45.941357 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:46.135636 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:46.216545 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:46.442339 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:46.635169 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:46.706531 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:46.956358 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:47.138559 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:47.205653 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:47.442493 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:47.634640 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:47.704158 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:47.941490 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:48.134406 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:48.204758 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:48.442541 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:48.635463 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:48.703998 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:48.941580 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:49.134750 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:49.204314 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:49.450461 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:49.635123 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:49.704559 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:49.941583 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:50.134635 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:50.204194 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:50.441124 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:50.635092 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:50.704236 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:50.941694 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:51.134689 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:51.204049 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:51.445156 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:51.634522 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:51.704277 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:51.942133 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:52.135010 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:52.204244 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:52.441967 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:52.634710 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:52.704444 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:52.941852 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:53.135494 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:53.205690 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:53.441805 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:53.636528 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:53.704422 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:53.942071 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:54.134678 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:54.204342 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:54.442353 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:54.635093 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:54.704801 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:54.959343 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:55.134869 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:55.204628 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:55.441899 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:55.634180 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:55.706037 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:55.943385 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:56.134172 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:56.203867 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:56.442122 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:56.636160 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:56.705263 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:56.941444 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:57.135330 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:57.204133 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:57.441117 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:57.635409 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:57.706443 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:57.940915 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:58.136473 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:58.204446 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:58.442453 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:58.636613 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:58.707573 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:58.942493 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:59.134153 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:59.203954 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:59.478057 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:52:59.638866 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:52:59.708861 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:52:59.942989 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:00.142610 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:00.208773 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:00.442178 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:00.635208 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:00.703982 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:00.941272 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:01.135463 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:01.204644 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:01.442205 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:01.636203 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:01.706287 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:01.942049 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:02.134978 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:02.207212 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:02.440991 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:02.634857 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:02.703946 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:02.942350 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:03.134712 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:03.206219 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:03.441883 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:03.634952 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:03.704863 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:03.941780 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:04.134786 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:04.204048 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:04.441399 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:04.634785 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:04.704536 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:04.941422 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:05.136002 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:05.204589 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:05.442468 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:05.634907 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:05.705367 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:05.941212 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:06.135249 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:06.204766 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:06.441885 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:06.634384 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:06.713795 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:06.942333 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:07.134741 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:07.205383 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:07.441957 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0530 20:53:07.635379 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:07.704767 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:07.941588 2294803 kapi.go:107] duration metric: took 45.027960536s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0530 20:53:08.135071 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:08.204501 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:08.634163 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:08.704607 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:09.134268 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:09.204390 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:09.634650 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:09.703766 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:10.135548 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:10.204688 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:10.634759 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:10.703975 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:11.135409 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:11.204481 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:11.634771 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:11.704601 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:12.134083 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:12.203479 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:12.634464 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:12.703995 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:13.135057 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:13.203684 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:13.634621 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:13.703989 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:14.135164 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:14.203630 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:14.635283 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:14.704282 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:15.134868 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:15.203819 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:15.635042 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:15.705048 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:16.134972 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:16.204280 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:16.635079 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:16.704367 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:17.135754 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:17.204735 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:17.635146 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:17.713606 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:18.134805 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:18.204305 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:18.634154 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:18.704160 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:19.134369 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:19.204538 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:19.634730 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:19.704151 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:20.135132 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:20.204172 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:20.634166 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:20.703890 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:21.134633 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:21.203793 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:21.634902 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:21.704647 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:22.134130 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:22.204421 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:22.634470 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:22.705723 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:23.134575 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:23.203813 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:23.635003 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:23.704150 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:24.134621 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:24.204448 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:24.634906 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:24.704591 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:25.134666 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:25.204050 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:25.634036 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:25.703939 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:26.134711 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:26.203642 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:26.634645 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:26.704208 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:27.134238 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:27.204090 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:27.634117 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:27.704158 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:28.135090 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:28.203786 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:28.634924 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:28.704837 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:29.135329 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:29.204038 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:29.635541 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:29.704769 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:30.135543 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:30.205240 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:30.634242 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:30.704457 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:31.135736 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:31.204205 2294803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0530 20:53:31.642746 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:31.707193 2294803 kapi.go:107] duration metric: took 1m11.014719858s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0530 20:53:32.135606 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:32.635583 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:33.135551 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:33.634718 2294803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0530 20:53:34.134462 2294803 kapi.go:107] duration metric: took 1m10.01406186s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0530 20:53:34.136636 2294803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-084881 cluster.
	I0530 20:53:34.138602 2294803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0530 20:53:34.140250 2294803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0530 20:53:34.142226 2294803 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0530 20:53:34.144163 2294803 addons.go:499] enable addons completed in 1m20.842182702s: enabled=[storage-provisioner cloud-spanner default-storageclass ingress-dns inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0530 20:53:34.144216 2294803 start.go:233] waiting for cluster config update ...
	I0530 20:53:34.144237 2294803 start.go:242] writing updated cluster config ...
	I0530 20:53:34.144570 2294803 ssh_runner.go:195] Run: rm -f paused
	I0530 20:53:34.544197 2294803 start.go:568] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0530 20:53:34.546471 2294803 out.go:177] * Done! kubectl is now configured to use "addons-084881" cluster and "default" namespace by default
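The kapi.go:96 lines above are a simple label-selector poll: minikube lists pods matching each addon's selector roughly twice a second until one reports Running, then logs the kapi.go:107 duration metric. Below is a minimal sketch of that pattern with client-go; it is illustrative only, not minikube's actual code, and the kubeconfig path is hypothetical.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until at least one pod matching selector is Running,
// mirroring the "waiting for pod ... current state: Pending" loop above.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
					return nil
				}
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // ~2 polls/second, matching the timestamp spacing above
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
		panic(err)
	}
}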
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	93dfb542969d4       13753a81eccfd       7 seconds ago        Exited              hello-world-app           2                   22c1f8b9c956e       hello-world-app-65bdb79f98-44ph7
	551bea2df637f       60dc18151daf8       27 seconds ago       Exited              registry-proxy            5                   643557ef2450e       registry-proxy-d7x5f
	7bf43ab6b9484       5ee47dcca7543       31 seconds ago       Running             nginx                     0                   ee38b42c0afa0       nginx
	a77450474f31d       d23bd5d730ccb       About a minute ago   Running             headlamp                  0                   4c487c1039183       headlamp-6b5756787-jnx5p
	c8c7332bcef1c       2a5f29343eb03       2 minutes ago        Running             gcp-auth                  0                   209039fefd2d4       gcp-auth-58478865f7-bmjjs
	b949756db7d4c       97a306084391e       2 minutes ago        Exited              patch                     0                   c2bc14fa80b95       ingress-nginx-admission-patch-59dmz
	f560ad7683c24       97a306084391e       3 minutes ago        Exited              create                    0                   80389ce167a17       ingress-nginx-admission-create-k4wcr
	806f0099e7aac       4206ae70dd039       3 minutes ago        Running             registry                  0                   4df08001bde43       registry-j74nb
	5c5b68bdceed1       97e04611ad434       3 minutes ago        Running             coredns                   0                   3ca0513984547       coredns-5d78c9869d-ksg2p
	3d719535e72b7       ba04bb24b9575       3 minutes ago        Running             storage-provisioner       0                   f9983f0245a01       storage-provisioner
	af06deacbe464       b18bf71b941ba       3 minutes ago        Running             kindnet-cni               0                   b46bb2e7a2e33       kindnet-rfjr4
	eb52e16f0bf4f       29921a0845422       3 minutes ago        Running             kube-proxy                0                   c8bdc3b60665e       kube-proxy-427l8
	082669e89f69c       305d7ed1dae28       4 minutes ago        Running             kube-scheduler            0                   fe6d9b8c5d54e       kube-scheduler-addons-084881
	1c58be7ab9b9a       2ee705380c3c5       4 minutes ago        Running             kube-controller-manager   0                   1c5bc2fdf710f       kube-controller-manager-addons-084881
	8ec23ffd6cce0       72c9df6be7f1b       4 minutes ago        Running             kube-apiserver            0                   ace0f0469b5be       kube-apiserver-addons-084881
	613a3ce5b6219       24bc64e911039       4 minutes ago        Running             etcd                      0                   3cd4b1a2114b7       etcd-addons-084881
	
	* 
	* ==> containerd <==
	* May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.756447848Z" level=warning msg="cleaning up after shim disconnected" id=469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf namespace=k8s.io
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.756459064Z" level=info msg="cleaning up dead shim"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.769025497Z" level=warning msg="cleanup warnings time=\"2023-05-30T20:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9643 runtime=io.containerd.runc.v2\n"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.772570148Z" level=info msg="StopContainer for \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\" returns successfully"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.773419282Z" level=info msg="StopPodSandbox for \"dbb4bf4db5a470b8b195496c38fb681e8fe7706c8626d4bbc647273446c9e7f9\""
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.773500405Z" level=info msg="Container to stop \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.814370008Z" level=info msg="shim disconnected" id=dbb4bf4db5a470b8b195496c38fb681e8fe7706c8626d4bbc647273446c9e7f9
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.814434557Z" level=warning msg="cleaning up after shim disconnected" id=dbb4bf4db5a470b8b195496c38fb681e8fe7706c8626d4bbc647273446c9e7f9 namespace=k8s.io
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.814447095Z" level=info msg="cleaning up dead shim"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.827433348Z" level=warning msg="cleanup warnings time=\"2023-05-30T20:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9674 runtime=io.containerd.runc.v2\n"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.838388742Z" level=info msg="CreateContainer within sandbox \"22c1f8b9c956e3afa5dc2f67efa47d1ec04e23a8094031984587f00c6ba0ff3b\" for container &ContainerMetadata{Name:hello-world-app,Attempt:2,}"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.857706692Z" level=info msg="CreateContainer within sandbox \"22c1f8b9c956e3afa5dc2f67efa47d1ec04e23a8094031984587f00c6ba0ff3b\" for &ContainerMetadata{Name:hello-world-app,Attempt:2,} returns container id \"93dfb542969d4a4aa851013bc5d06934dd6d91c436c6d4cbc37aaf3376b4bb7a\""
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.858680238Z" level=info msg="StartContainer for \"93dfb542969d4a4aa851013bc5d06934dd6d91c436c6d4cbc37aaf3376b4bb7a\""
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.921259623Z" level=info msg="TearDown network for sandbox \"dbb4bf4db5a470b8b195496c38fb681e8fe7706c8626d4bbc647273446c9e7f9\" successfully"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.921321440Z" level=info msg="StopPodSandbox for \"dbb4bf4db5a470b8b195496c38fb681e8fe7706c8626d4bbc647273446c9e7f9\" returns successfully"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.940740730Z" level=info msg="StartContainer for \"93dfb542969d4a4aa851013bc5d06934dd6d91c436c6d4cbc37aaf3376b4bb7a\" returns successfully"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.969279393Z" level=info msg="shim disconnected" id=93dfb542969d4a4aa851013bc5d06934dd6d91c436c6d4cbc37aaf3376b4bb7a
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.969410985Z" level=warning msg="cleaning up after shim disconnected" id=93dfb542969d4a4aa851013bc5d06934dd6d91c436c6d4cbc37aaf3376b4bb7a namespace=k8s.io
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.969423597Z" level=info msg="cleaning up dead shim"
	May 30 20:55:45 addons-084881 containerd[739]: time="2023-05-30T20:55:45.982532999Z" level=warning msg="cleanup warnings time=\"2023-05-30T20:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9765 runtime=io.containerd.runc.v2\n"
	May 30 20:55:46 addons-084881 containerd[739]: time="2023-05-30T20:55:46.106803111Z" level=info msg="RemoveContainer for \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\""
	May 30 20:55:46 addons-084881 containerd[739]: time="2023-05-30T20:55:46.123343984Z" level=info msg="RemoveContainer for \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\" returns successfully"
	May 30 20:55:46 addons-084881 containerd[739]: time="2023-05-30T20:55:46.123909477Z" level=error msg="ContainerStatus for \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\": not found"
	May 30 20:55:46 addons-084881 containerd[739]: time="2023-05-30T20:55:46.125441772Z" level=info msg="RemoveContainer for \"d4c2b39fbe5ff5809638293030447f2d7cb75d96ae9563f664f9be2afc39a804\""
	May 30 20:55:46 addons-084881 containerd[739]: time="2023-05-30T20:55:46.132040534Z" level=info msg="RemoveContainer for \"d4c2b39fbe5ff5809638293030447f2d7cb75d96ae9563f664f9be2afc39a804\" returns successfully"
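The CreateContainer → StartContainer → "shim disconnected" sequence above is the normal containerd task lifecycle: a container record is created inside an existing pod sandbox, a runc shim is started for it, and the shim exits when the process does (hello-world-app is crash-looping here, hence Attempt:2). The following rough sketch drives the same lifecycle through containerd's Go client rather than the CRI path kubelet uses; the socket path and image are assumptions.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Default containerd socket; kubelet-managed containers live in the
	// "k8s.io" namespace seen in the log, but this demo uses its own.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "example")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer": container metadata + rootfs snapshot + OCI spec.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image),
			oci.WithProcessArgs("echo", "hello from the shim")))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask launches the runc shim for this container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // subscribe before Start to avoid missing the exit
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer"
		log.Fatal(err)
	}
	// When echo exits, the shim disconnects and containerd emits the
	// "cleaning up dead shim" warnings seen above.
	exit := <-exitCh
	log.Printf("container exited with status %d", exit.ExitCode())
}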
	
	* 
	* ==> coredns [5c5b68bdceed16eefa52edb2364ef317c0a66325b87172adf4ac8c66243c3c54] <==
	* [INFO] 10.244.0.16:43421 - 25818 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089542s
	[INFO] 10.244.0.16:57513 - 57338 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002253274s
	[INFO] 10.244.0.16:43421 - 40194 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001919705s
	[INFO] 10.244.0.16:57513 - 39324 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002461952s
	[INFO] 10.244.0.16:43421 - 42269 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001572442s
	[INFO] 10.244.0.16:57513 - 38592 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000990261s
	[INFO] 10.244.0.16:43421 - 1348 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000111409s
	[INFO] 10.244.0.16:49458 - 242 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117865s
	[INFO] 10.244.0.16:49441 - 54533 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040837s
	[INFO] 10.244.0.16:49441 - 30268 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053456s
	[INFO] 10.244.0.16:49458 - 33152 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033977s
	[INFO] 10.244.0.16:49441 - 19840 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062276s
	[INFO] 10.244.0.16:49458 - 20433 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035872s
	[INFO] 10.244.0.16:49458 - 59127 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043757s
	[INFO] 10.244.0.16:49441 - 6181 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036053s
	[INFO] 10.244.0.16:49458 - 24142 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045194s
	[INFO] 10.244.0.16:49441 - 21192 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027003s
	[INFO] 10.244.0.16:49458 - 63777 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048943s
	[INFO] 10.244.0.16:49441 - 52230 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030761s
	[INFO] 10.244.0.16:49458 - 54212 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001382241s
	[INFO] 10.244.0.16:49441 - 48146 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001158875s
	[INFO] 10.244.0.16:49458 - 25488 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001087457s
	[INFO] 10.244.0.16:49458 - 40952 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000160302s
	[INFO] 10.244.0.16:49441 - 30588 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000960164s
	[INFO] 10.244.0.16:49441 - 58927 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074666s
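The NXDOMAIN/NOERROR pairs above show the pod's resolver walking its resolv.conf search path: with the usual ndots:5 setting, even the already-qualified name hello-world-app.default.svc.cluster.local (four dots) has each search suffix appended first, so the .ingress-nginx.svc.cluster.local, .svc.cluster.local, .cluster.local, and .us-east-2.compute.internal expansions all fail before the bare name finally answers NOERROR. A trailing dot marks a name fully qualified and skips that expansion entirely; a minimal sketch, hypothetical and meant to run inside a cluster pod:

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// The trailing dot makes the name fully qualified, so the resolver
	// queries it directly instead of appending resolv.conf search suffixes.
	addrs, err := net.DefaultResolver.LookupHost(context.Background(),
		"hello-world-app.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}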
	
	* 
	* ==> describe nodes <==
	* Name:               addons-084881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-084881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d0d5d534b34391ed9438fcde26494d33a798fae
	                    minikube.k8s.io/name=addons-084881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_30T20_52_01_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-084881
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 May 2023 20:51:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-084881
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 May 2023 20:55:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:51:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:51:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:51:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 May 2023 20:55:34 +0000   Tue, 30 May 2023 20:52:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-084881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebdb46a61c784da58e322d5ccf84a80e
	  System UUID:                32c0a95d-4ed7-4b24-a3c4-9fbe00414871
	  Boot ID:                    c7a134eb-0be2-46e6-bcc1-b9fd815daa7a
	  Kernel Version:             5.15.0-1036-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-44ph7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-58478865f7-bmjjs                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  headlamp                    headlamp-6b5756787-jnx5p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 coredns-5d78c9869d-ksg2p                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m40s
	  kube-system                 etcd-addons-084881                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m52s
	  kube-system                 kindnet-rfjr4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m40s
	  kube-system                 kube-apiserver-addons-084881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-addons-084881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-427l8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-scheduler-addons-084881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 registry-j74nb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 registry-proxy-d7x5f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node addons-084881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node addons-084881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node addons-084881 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m53s                kubelet          Node addons-084881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s                kubelet          Node addons-084881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s                kubelet          Node addons-084881 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m53s                kubelet          Node addons-084881 status is now: NodeNotReady
	  Normal  Starting                 3m53s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m52s                kubelet          Node addons-084881 status is now: NodeReady
	  Normal  RegisteredNode           3m40s                node-controller  Node addons-084881 event: Registered Node addons-084881 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000728] FS-Cache: N-cookie c=000001cc [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=000000009d169f1a
	[  +0.001295] FS-Cache: N-key=[8] '34635c0100000000'
	[  +0.002506] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=000001c6 [p=000001c3 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=0000000053231b04
	[  +0.001170] FS-Cache: O-key=[8] '34635c0100000000'
	[  +0.000837] FS-Cache: N-cookie c=000001cd [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=00000000296365f9
	[  +0.001089] FS-Cache: N-key=[8] '34635c0100000000'
	[  +3.513956] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=000001c4 [p=000001c3 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=000000008e490977
	[  +0.001097] FS-Cache: O-key=[8] '33635c0100000000'
	[  +0.000721] FS-Cache: N-cookie c=000001cf [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=00000000e4e5b2b9
	[  +0.001067] FS-Cache: N-key=[8] '33635c0100000000'
	[  +0.410886] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=000001c9 [p=000001c3 fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=000000004ccc93d9
	[  +0.001066] FS-Cache: O-key=[8] '39635c0100000000'
	[  +0.000717] FS-Cache: N-cookie c=000001d0 [p=000001c3 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=000000009d169f1a
	[  +0.001073] FS-Cache: N-key=[8] '39635c0100000000'
	[May30 20:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [613a3ce5b62193d205696707e5572226751511429928e4165faa5be3182ba9e0] <==
	* {"level":"info","ts":"2023-05-30T20:51:53.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-05-30T20:51:53.126Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-05-30T20:51:53.129Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-05-30T20:51:53.130Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-05-30T20:51:53.213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-05-30T20:51:53.217Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-084881 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-30T20:51:53.219Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-30T20:51:53.220Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-30T20:51:53.235Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-05-30T20:51:53.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-30T20:51:53.248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [c8c7332bcef1cf05a3e6ed6b8a893978e3f6139d7d559af9f6b0a97a3bfe8f89] <==
	* 2023/05/30 20:53:32 GCP Auth Webhook started!
	2023/05/30 20:53:41 Ready to marshal response ...
	2023/05/30 20:53:41 Ready to write response ...
	2023/05/30 20:53:41 Ready to marshal response ...
	2023/05/30 20:53:41 Ready to write response ...
	2023/05/30 20:53:41 Ready to marshal response ...
	2023/05/30 20:53:41 Ready to write response ...
	2023/05/30 20:53:44 Ready to marshal response ...
	2023/05/30 20:53:44 Ready to write response ...
	2023/05/30 20:54:23 Ready to marshal response ...
	2023/05/30 20:54:23 Ready to write response ...
	2023/05/30 20:54:46 Ready to marshal response ...
	2023/05/30 20:54:46 Ready to write response ...
	2023/05/30 20:55:20 Ready to marshal response ...
	2023/05/30 20:55:20 Ready to write response ...
	2023/05/30 20:55:27 Ready to marshal response ...
	2023/05/30 20:55:27 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:55:53 up 2 days, 37 min,  0 users,  load average: 1.54, 1.92, 2.65
	Linux addons-084881 5.15.0-1036-aws #40~20.04.1-Ubuntu SMP Mon Apr 24 00:20:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [af06deacbe464051e210a3f055c95c3588f886041c59ccc0c8b936cec9c6fcb3] <==
	* I0530 20:53:45.221415       1 main.go:227] handling current node
	I0530 20:53:55.231714       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:53:55.231746       1 main.go:227] handling current node
	I0530 20:54:05.241602       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:05.242839       1 main.go:227] handling current node
	I0530 20:54:15.247260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:15.247290       1 main.go:227] handling current node
	I0530 20:54:25.259235       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:25.259262       1 main.go:227] handling current node
	I0530 20:54:35.272932       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:35.272962       1 main.go:227] handling current node
	I0530 20:54:45.278088       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:45.278115       1 main.go:227] handling current node
	I0530 20:54:55.301358       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:54:55.301712       1 main.go:227] handling current node
	I0530 20:55:05.314331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:05.314360       1 main.go:227] handling current node
	I0530 20:55:15.318606       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:15.318634       1 main.go:227] handling current node
	I0530 20:55:25.330765       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:25.330793       1 main.go:227] handling current node
	I0530 20:55:35.343459       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:35.343489       1 main.go:227] handling current node
	I0530 20:55:45.347570       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 20:55:45.347598       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [8ec23ffd6cce04b9b6f4d47dce22553c2db522c25837a81c821cb207293f180e] <==
	* I0530 20:55:02.368611       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.368670       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0530 20:55:02.414236       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.414490       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0530 20:55:02.424175       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.424237       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0530 20:55:02.441166       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.441231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0530 20:55:02.457780       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0530 20:55:02.458002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0530 20:55:03.329890       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0530 20:55:03.458820       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0530 20:55:03.467294       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0530 20:55:13.604861       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0530 20:55:13.619198       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0530 20:55:14.639684       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0530 20:55:19.843308       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0530 20:55:20.277723       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.109.81.166]
	I0530 20:55:28.076109       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.107.241.181]
	E0530 20:55:40.156967       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0530 20:55:40.156998       1 handler_proxy.go:100] no RequestInfo found in the context
	E0530 20:55:40.157036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0530 20:55:40.157213       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0530 20:55:40.189766       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [1c58be7ab9b9a60dc2a4d17caa2d6e2c41f7f2e0fac53a9a8c8a5359e64b3029] <==
	* W0530 20:55:19.102954       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:19.102997       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:21.891332       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:21.891367       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:22.196310       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:22.196353       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:22.519076       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:22.519110       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0530 20:55:23.730841       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0530 20:55:27.831548       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0530 20:55:27.847432       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-44ph7"
	W0530 20:55:34.832622       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:34.832659       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:38.512134       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:38.512170       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:42.072775       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:42.072812       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0530 20:55:43.284959       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0530 20:55:43.285058       1 shared_informer.go:318] Caches are synced for resource quota
	I0530 20:55:44.586961       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0530 20:55:44.602547       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0530 20:55:46.079616       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:46.079654       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0530 20:55:50.964709       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0530 20:55:50.964742       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [eb52e16f0bf4fd3420874af03cdfa2de7cce1c26b1df997e5d916dd56f889860] <==
	* I0530 20:52:14.643037       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0530 20:52:14.643146       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0530 20:52:14.643170       1 server_others.go:551] "Using iptables proxy"
	I0530 20:52:14.682528       1 server_others.go:190] "Using iptables Proxier"
	I0530 20:52:14.682567       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0530 20:52:14.682576       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0530 20:52:14.682589       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0530 20:52:14.682654       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0530 20:52:14.683208       1 server.go:657] "Version info" version="v1.27.2"
	I0530 20:52:14.683222       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0530 20:52:14.688912       1 config.go:188] "Starting service config controller"
	I0530 20:52:14.688938       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0530 20:52:14.688967       1 config.go:97] "Starting endpoint slice config controller"
	I0530 20:52:14.688989       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0530 20:52:14.690281       1 config.go:315] "Starting node config controller"
	I0530 20:52:14.690295       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0530 20:52:14.789954       1 shared_informer.go:318] Caches are synced for service config
	I0530 20:52:14.789942       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0530 20:52:14.790541       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [082669e89f69c1be4381226272cd9592929ec10e38177b6b7c6a68cbd4a02017] <==
	* W0530 20:51:57.392999       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:57.393106       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:57.393026       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0530 20:51:57.393189       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0530 20:51:57.393250       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0530 20:51:57.393351       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 20:51:57.393410       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 20:51:57.393356       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0530 20:51:57.393582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0530 20:51:57.393601       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0530 20:51:58.319511       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:58.319757       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:58.337207       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0530 20:51:58.337245       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0530 20:51:58.343950       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 20:51:58.343987       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0530 20:51:58.360726       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:58.360770       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:58.366675       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0530 20:51:58.366713       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0530 20:51:58.509998       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0530 20:51:58.510042       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0530 20:51:58.538321       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0530 20:51:58.538365       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0530 20:52:00.174684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 30 20:55:43 addons-084881 kubelet[1349]: I0530 20:55:43.923994    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j8sh9\" (UniqueName: \"kubernetes.io/projected/2cc16a5d-5d03-4b00-bbf5-84737deefcd5-kube-api-access-j8sh9\") on node \"addons-084881\" DevicePath \"\""
	May 30 20:55:44 addons-084881 kubelet[1349]: I0530 20:55:44.093450    1349 scope.go:115] "RemoveContainer" containerID="867c744ddf762e0a9fb1a1d08cff3603249b36d9501c979ca710c56c26b500d5"
	May 30 20:55:44 addons-084881 kubelet[1349]: E0530 20:55:44.653726    1349 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-858bcd4f57-tf65v.17640742f1752b81", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-858bcd4f57-tf65v", UID:"7a2c6505-5517-4877-a991-0ba7e1c365ac", APIVersion:"v1", ResourceVersion:"649", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-084881"}, FirstTimestamp:time.Date(2023, time.May, 30, 20, 55, 44, 643632001, time.Local), LastTimestamp:time.Date(2023, time.May, 30, 20, 55, 44, 643632001, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-858bcd4f57-tf65v.17640742f1752b81" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 30 20:55:44 addons-084881 kubelet[1349]: E0530 20:55:44.746086    1349 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-858bcd4f57-tf65v.17640742f76f4d04", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-858bcd4f57-tf65v", UID:"7a2c6505-5517-4877-a991-0ba7e1c365ac", APIVersion:"v1", ResourceVersion:"649", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-084881"}, FirstTimestamp:time.Date(2023, time.May, 30, 20, 55, 44, 743910660, time.Local), LastTimestamp:time.Date(2023, time.May, 30, 20, 55, 44, 743910660, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-858bcd4f57-tf65v.17640742f76f4d04" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 30 20:55:44 addons-084881 kubelet[1349]: I0530 20:55:44.838941    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2cc16a5d-5d03-4b00-bbf5-84737deefcd5 path="/var/lib/kubelet/pods/2cc16a5d-5d03-4b00-bbf5-84737deefcd5/volumes"
	May 30 20:55:44 addons-084881 kubelet[1349]: I0530 20:55:44.839434    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8d811343-7acb-462b-ae51-369a125b05fe path="/var/lib/kubelet/pods/8d811343-7acb-462b-ae51-369a125b05fe/volumes"
	May 30 20:55:44 addons-084881 kubelet[1349]: I0530 20:55:44.839794    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ec5d3404-df37-4e88-9a5e-5493b3057561 path="/var/lib/kubelet/pods/ec5d3404-df37-4e88-9a5e-5493b3057561/volumes"
	May 30 20:55:45 addons-084881 kubelet[1349]: I0530 20:55:45.835408    1349 scope.go:115] "RemoveContainer" containerID="d4c2b39fbe5ff5809638293030447f2d7cb75d96ae9563f664f9be2afc39a804"
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.038371    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a2c6505-5517-4877-a991-0ba7e1c365ac-webhook-cert\") pod \"7a2c6505-5517-4877-a991-0ba7e1c365ac\" (UID: \"7a2c6505-5517-4877-a991-0ba7e1c365ac\") "
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.038433    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8d52\" (UniqueName: \"kubernetes.io/projected/7a2c6505-5517-4877-a991-0ba7e1c365ac-kube-api-access-w8d52\") pod \"7a2c6505-5517-4877-a991-0ba7e1c365ac\" (UID: \"7a2c6505-5517-4877-a991-0ba7e1c365ac\") "
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.040956    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2c6505-5517-4877-a991-0ba7e1c365ac-kube-api-access-w8d52" (OuterVolumeSpecName: "kube-api-access-w8d52") pod "7a2c6505-5517-4877-a991-0ba7e1c365ac" (UID: "7a2c6505-5517-4877-a991-0ba7e1c365ac"). InnerVolumeSpecName "kube-api-access-w8d52". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.041810    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2c6505-5517-4877-a991-0ba7e1c365ac-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7a2c6505-5517-4877-a991-0ba7e1c365ac" (UID: "7a2c6505-5517-4877-a991-0ba7e1c365ac"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.101989    1349 scope.go:115] "RemoveContainer" containerID="469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf"
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.108569    1349 scope.go:115] "RemoveContainer" containerID="93dfb542969d4a4aa851013bc5d06934dd6d91c436c6d4cbc37aaf3376b4bb7a"
	May 30 20:55:46 addons-084881 kubelet[1349]: E0530 20:55:46.109460    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-44ph7_default(5805efdb-3dd5-4083-877b-efafae72403e)\"" pod="default/hello-world-app-65bdb79f98-44ph7" podUID=5805efdb-3dd5-4083-877b-efafae72403e
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.123611    1349 scope.go:115] "RemoveContainer" containerID="469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf"
	May 30 20:55:46 addons-084881 kubelet[1349]: E0530 20:55:46.124123    1349 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\": not found" containerID="469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf"
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.124166    1349 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf} err="failed to get container status \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"469ab2ba1cb954f53bdfda0275888924146b0afbb61d9f6eef83e10b3b5247bf\": not found"
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.124179    1349 scope.go:115] "RemoveContainer" containerID="d4c2b39fbe5ff5809638293030447f2d7cb75d96ae9563f664f9be2afc39a804"
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.139672    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w8d52\" (UniqueName: \"kubernetes.io/projected/7a2c6505-5517-4877-a991-0ba7e1c365ac-kube-api-access-w8d52\") on node \"addons-084881\" DevicePath \"\""
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.139721    1349 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a2c6505-5517-4877-a991-0ba7e1c365ac-webhook-cert\") on node \"addons-084881\" DevicePath \"\""
	May 30 20:55:46 addons-084881 kubelet[1349]: I0530 20:55:46.838835    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=7a2c6505-5517-4877-a991-0ba7e1c365ac path="/var/lib/kubelet/pods/7a2c6505-5517-4877-a991-0ba7e1c365ac/volumes"
	May 30 20:55:51 addons-084881 kubelet[1349]: I0530 20:55:51.835496    1349 kubelet_pods.go:894] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d7x5f" secret="" err="secret \"gcp-auth\" not found"
	May 30 20:55:51 addons-084881 kubelet[1349]: I0530 20:55:51.835568    1349 scope.go:115] "RemoveContainer" containerID="551bea2df637f3ba8622e71aa5eb85655dadba6002dc82d53fc39131a5935468"
	May 30 20:55:51 addons-084881 kubelet[1349]: E0530 20:55:51.835879    1349 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-d7x5f_kube-system(f4513549-bfa8-495e-8b35-eee656d4eb84)\"" pod="kube-system/registry-proxy-d7x5f" podUID=f4513549-bfa8-495e-8b35-eee656d4eb84
	
	* 
	* ==> storage-provisioner [3d719535e72b7bd312068a00b86c89faa511de8b0be3bb3a8a44db591b3684bb] <==
	* I0530 20:52:18.465336       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0530 20:52:18.493704       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0530 20:52:18.493803       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0530 20:52:18.510326       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0530 20:52:18.510553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-084881_28e3c32e-0519-47a0-aaab-5fb678849377!
	I0530 20:52:18.517740       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6b5db0f-6381-4a92-bc9a-a880d59eb8bd", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-084881_28e3c32e-0519-47a0-aaab-5fb678849377 became leader
	I0530 20:52:18.611223       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-084881_28e3c32e-0519-47a0-aaab-5fb678849377!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-084881 -n addons-084881
helpers_test.go:261: (dbg) Run:  kubectl --context addons-084881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
2023/05/30 20:56:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:08 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:56:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/30 20:56:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/30 20:56:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/30 20:56:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/30 20:56:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:32 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:56:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/30 20:56:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/30 20:56:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/05/30 20:56:39 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:56:39 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/05/30 20:56:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
--- FAIL: TestAddons/parallel/Ingress (35.54s)
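
The retry trace above captures the external-access check's schedule: five attempts per round, with the delay doubling from 1s to 8s before giving up. The log format suggests a retrying HTTP client (it matches hashicorp/go-retryablehttp's debug output). A minimal standalone sketch of the same backoff pattern in Go (getWithBackoff is an illustrative name, not the actual test helper):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithBackoff issues GET requests until one succeeds or the attempt
// budget is exhausted, doubling the wait between tries (1s, 2s, 4s, 8s).
func getWithBackoff(url string, attempts int) (*http.Response, error) {
	delay := time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		if i < attempts-1 {
			fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, delay, attempts-i-1)
			time.Sleep(delay)
			delay *= 2
		}
	}
	return nil, fmt.Errorf("GET %s giving up after %d attempt(s): %w", url, attempts, lastErr)
}

func main() {
	// Against the registry NodePort that refused connections above.
	if _, err := getWithBackoff("http://192.168.49.2:5000", 5); err != nil {
		fmt.Println(err)
	}
}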

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image load --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 image load --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr: (3.749936459s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-812242" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image load --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 image load --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr: (3.464972858s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-812242" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.70s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.869439647s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-812242
functional_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image load --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 image load --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr: (3.349529947s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-812242" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)
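
All three daemon-load variants above fail the same way: image load --daemon exits 0, yet the tag never appears in image ls on the containerd side. A minimal reproduction outside the test harness (a sketch; verifyLoaded is a hypothetical helper, and the binary path matches the commands logged above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// verifyLoaded runs `image ls` for the profile and reports whether tag is listed.
func verifyLoaded(profile, tag string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "ls").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), tag), nil
}

func main() {
	profile := "functional-812242"
	tag := "gcr.io/google-containers/addon-resizer:functional-812242"
	// Load from the local Docker daemon, then check the node's image list.
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"image", "load", "--daemon", tag).Run(); err != nil {
		fmt.Println("image load failed:", err)
		return
	}
	ok, err := verifyLoaded(profile, tag)
	fmt.Printf("loaded=%v err=%v\n", ok, err) // loaded=false reproduces the failure
}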

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image save gcr.io/google-containers/addon-resizer:functional-812242 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:384: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:409: loading image into minikube from file: <nil>

** stderr ** 
	I0530 21:00:57.426568 2319433 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:00:57.428205 2319433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:00:57.428228 2319433 out.go:309] Setting ErrFile to fd 2...
	I0530 21:00:57.428235 2319433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:00:57.428454 2319433 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:00:57.429179 2319433 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 21:00:57.429386 2319433 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 21:00:57.429998 2319433 cli_runner.go:164] Run: docker container inspect functional-812242 --format={{.State.Status}}
	I0530 21:00:57.471889 2319433 ssh_runner.go:195] Run: systemctl --version
	I0530 21:00:57.471985 2319433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812242
	I0530 21:00:57.494618 2319433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40956 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/functional-812242/id_rsa Username:docker}
	I0530 21:00:57.588805 2319433 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0530 21:00:57.588970 2319433 cache_images.go:254] Failed to load cached images for profile functional-812242. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0530 21:00:57.589012 2319433 cache_images.go:262] succeeded pushing to: 
	I0530 21:00:57.589045 2319433 cache_images.go:263] failed pushing to: functional-812242

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)
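
The stat error in the stderr block ties this failure back to ImageSaveToFile above: the earlier image save never wrote the tarball, so image load fails before it ever contacts the node. A sketch that makes the dependency explicit by checking the file first (loadImageFromTar is a hypothetical helper, not minikube's code; paths match the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImageFromTar refuses to call `image load` when the tarball is missing,
// surfacing the real cause (a failed `image save`) instead of a stat error.
func loadImageFromTar(profile, tarPath string) error {
	if _, err := os.Stat(tarPath); err != nil {
		return fmt.Errorf("image tarball %q not found (did image save succeed?): %w", tarPath, err)
	}
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "load", tarPath)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := loadImageFromTar("functional-812242",
		"/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar")
	if err != nil {
		fmt.Println(err)
	}
}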

TestIngressAddonLegacy/serial/ValidateIngressAddons (57.79s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-208395 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-208395 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.97893909s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-208395 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-208395 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [17dedf08-1019-4d4a-9ec8-66db0edc8b65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [17dedf08-1019-4d4a-9ec8-66db0edc8b65] Running
E0530 21:04:02.254374 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.018756859s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-208395 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.019911049s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
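
The timeout above means nothing answered DNS queries at 192.168.49.2 within nslookup's budget. An equivalent probe in Go, pinning the resolver to that server (a sketch; the 15s deadline roughly mirrors how long nslookup waited before giving up):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send the lookup to the minikube node IP, as `nslookup hello-john.test 192.168.49.2` does.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // an i/o timeout reproduces ";; connection timed out"
		return
	}
	fmt.Println(addrs)
}
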
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons disable ingress-dns --alsologtostderr -v=1: (3.582645817s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons disable ingress --alsologtostderr -v=1: (7.362358255s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-208395
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-208395:

-- stdout --
	[
	    {
	        "Id": "e1afd1aea6e6faef31b8e713c9e3fdcc7ca21f978cb5b6ffc956c25b4f91bec2",
	        "Created": "2023-05-30T21:02:16.753056594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2324068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-30T21:02:17.070774498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dee4774de4f99268e16c76379a36a4607bed47635a069a7e60c17cd24d9aaa76",
	        "ResolvConfPath": "/var/lib/docker/containers/e1afd1aea6e6faef31b8e713c9e3fdcc7ca21f978cb5b6ffc956c25b4f91bec2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1afd1aea6e6faef31b8e713c9e3fdcc7ca21f978cb5b6ffc956c25b4f91bec2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1afd1aea6e6faef31b8e713c9e3fdcc7ca21f978cb5b6ffc956c25b4f91bec2/hosts",
	        "LogPath": "/var/lib/docker/containers/e1afd1aea6e6faef31b8e713c9e3fdcc7ca21f978cb5b6ffc956c25b4f91bec2/e1afd1aea6e6faef31b8e713c9e3fdcc7ca21f978cb5b6ffc956c25b4f91bec2-json.log",
	        "Name": "/ingress-addon-legacy-208395",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-208395:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-208395",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ac16f65173af3600ee05eaf691b677fe64beb09e492418d1e16be4c81c23d853-init/diff:/var/lib/docker/overlay2/e2ed5c199a0c2e09246fd5671b525fc670ce3dff10bd06ad0c2ad37b9496c295/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac16f65173af3600ee05eaf691b677fe64beb09e492418d1e16be4c81c23d853/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac16f65173af3600ee05eaf691b677fe64beb09e492418d1e16be4c81c23d853/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac16f65173af3600ee05eaf691b677fe64beb09e492418d1e16be4c81c23d853/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-208395",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-208395/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-208395",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-208395",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-208395",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a3ae57a1de175cfa286f7d5e8094154cbbae766c586d26a7dc390718ef3c446",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40961"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40960"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40957"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40959"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "40958"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3a3ae57a1de1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-208395": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e1afd1aea6e6",
	                        "ingress-addon-legacy-208395"
	                    ],
	                    "NetworkID": "f96b098576fd4fd6d5e1de64bac8897f8cfb8cf4440b5e25a7f62222cf7aca4f",
	                    "EndpointID": "f0c4cea9f5fb279bd957f54e56fbe4ae8d370d35b94792a422c61ebf51d02c1a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
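Note on the inspect output above: every published port on this container, including the registry's 5000/tcp, is bound to 127.0.0.1 behind an ephemeral host port (40959 for 5000/tcp here). As an illustrative sketch only, not a command the test ran, the host-side port for a given container port can be read back with the same Go-template query the minikube logs below use for 22/tcp, substituting the port key; the container name is taken from this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' ingress-addon-legacy-208395

Against the state captured above this prints 40959; a dial straight to the container IP (192.168.49.2:5000) does not go through these loopback publishes.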
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-208395 -n ingress-addon-legacy-208395
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-208395 logs -n 25: (1.466756831s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-812242 ssh findmnt        | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-812242 ssh findmnt        | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-812242                 | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| start          | -p functional-812242                 | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| start          | -p functional-812242                 | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| start          | -p functional-812242                 | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | -p functional-812242                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-812242 ssh pgrep          | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-812242 image build -t     | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | localhost/my-image:functional-812242 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-812242                    | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-812242 image ls           | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	| delete         | -p functional-812242                 | functional-812242           | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:01 UTC |
	| start          | -p ingress-addon-legacy-208395       | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:01 UTC | 30 May 23 21:03 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=containerd       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-208395          | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:03 UTC | 30 May 23 21:03 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-208395          | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:03 UTC | 30 May 23 21:03 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-208395          | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:04 UTC | 30 May 23 21:04 UTC |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-208395 ip       | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:04 UTC | 30 May 23 21:04 UTC |
	| addons         | ingress-addon-legacy-208395          | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:04 UTC | 30 May 23 21:04 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-208395          | ingress-addon-legacy-208395 | jenkins | v1.30.1 | 30 May 23 21:04 UTC | 30 May 23 21:04 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 21:01:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 21:01:46.065494 2323591 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:01:46.065648 2323591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:01:46.065658 2323591 out.go:309] Setting ErrFile to fd 2...
	I0530 21:01:46.065664 2323591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:01:46.065860 2323591 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:01:46.066265 2323591 out.go:303] Setting JSON to false
	I0530 21:01:46.067256 2323591 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":175405,"bootTime":1685305101,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 21:01:46.067333 2323591 start.go:135] virtualization:  
	I0530 21:01:46.069911 2323591 out.go:177] * [ingress-addon-legacy-208395] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 21:01:46.072235 2323591 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 21:01:46.072375 2323591 notify.go:220] Checking for updates...
	I0530 21:01:46.076868 2323591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 21:01:46.078845 2323591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:01:46.080896 2323591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 21:01:46.083173 2323591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 21:01:46.085352 2323591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 21:01:46.087896 2323591 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 21:01:46.113082 2323591 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 21:01:46.113186 2323591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:01:46.192739 2323591 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-05-30 21:01:46.18245567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:01:46.192848 2323591 docker.go:294] overlay module found
	I0530 21:01:46.195775 2323591 out.go:177] * Using the docker driver based on user configuration
	I0530 21:01:46.198130 2323591 start.go:295] selected driver: docker
	I0530 21:01:46.198174 2323591 start.go:870] validating driver "docker" against <nil>
	I0530 21:01:46.198188 2323591 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 21:01:46.199019 2323591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:01:46.259850 2323591 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-05-30 21:01:46.249907135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:01:46.259961 2323591 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 21:01:46.260198 2323591 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 21:01:46.262685 2323591 out.go:177] * Using Docker driver with root privileges
	I0530 21:01:46.264925 2323591 cni.go:84] Creating CNI manager for ""
	I0530 21:01:46.264942 2323591 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 21:01:46.264959 2323591 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0530 21:01:46.264975 2323591 start_flags.go:319] config:
	{Name:ingress-addon-legacy-208395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:01:46.269278 2323591 out.go:177] * Starting control plane node ingress-addon-legacy-208395 in cluster ingress-addon-legacy-208395
	I0530 21:01:46.271686 2323591 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0530 21:01:46.274344 2323591 out.go:177] * Pulling base image ...
	I0530 21:01:46.276544 2323591 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0530 21:01:46.276627 2323591 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0530 21:01:46.295869 2323591 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0530 21:01:46.295920 2323591 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0530 21:01:46.361976 2323591 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0530 21:01:46.362003 2323591 cache.go:57] Caching tarball of preloaded images
	I0530 21:01:46.362823 2323591 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0530 21:01:46.365977 2323591 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0530 21:01:46.368722 2323591 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0530 21:01:46.491538 2323591 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0530 21:02:08.696945 2323591 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0530 21:02:08.697047 2323591 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0530 21:02:09.829129 2323591 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0530 21:02:09.829518 2323591 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/config.json ...
	I0530 21:02:09.829552 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/config.json: {Name:mk0ed2af346d8ba61f5786ec527b15b812919e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:09.829736 2323591 cache.go:195] Successfully downloaded all kic artifacts
	I0530 21:02:09.829791 2323591 start.go:364] acquiring machines lock for ingress-addon-legacy-208395: {Name:mkb169545f58d4a9cd109a0b174fb4d4619f219e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 21:02:09.831234 2323591 start.go:368] acquired machines lock for "ingress-addon-legacy-208395" in 1.426154ms
	I0530 21:02:09.831269 2323591 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-208395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0530 21:02:09.831357 2323591 start.go:125] createHost starting for "" (driver="docker")
	I0530 21:02:09.834396 2323591 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0530 21:02:09.834615 2323591 start.go:159] libmachine.API.Create for "ingress-addon-legacy-208395" (driver="docker")
	I0530 21:02:09.834642 2323591 client.go:168] LocalClient.Create starting
	I0530 21:02:09.834729 2323591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem
	I0530 21:02:09.834769 2323591 main.go:141] libmachine: Decoding PEM data...
	I0530 21:02:09.834784 2323591 main.go:141] libmachine: Parsing certificate...
	I0530 21:02:09.834841 2323591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem
	I0530 21:02:09.834863 2323591 main.go:141] libmachine: Decoding PEM data...
	I0530 21:02:09.834878 2323591 main.go:141] libmachine: Parsing certificate...
	I0530 21:02:09.835246 2323591 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-208395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0530 21:02:09.853238 2323591 cli_runner.go:211] docker network inspect ingress-addon-legacy-208395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0530 21:02:09.853348 2323591 network_create.go:281] running [docker network inspect ingress-addon-legacy-208395] to gather additional debugging logs...
	I0530 21:02:09.853369 2323591 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-208395
	W0530 21:02:09.876342 2323591 cli_runner.go:211] docker network inspect ingress-addon-legacy-208395 returned with exit code 1
	I0530 21:02:09.876378 2323591 network_create.go:284] error running [docker network inspect ingress-addon-legacy-208395]: docker network inspect ingress-addon-legacy-208395: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-208395 not found
	I0530 21:02:09.876393 2323591 network_create.go:286] output of [docker network inspect ingress-addon-legacy-208395]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-208395 not found
	
	** /stderr **
	I0530 21:02:09.876470 2323591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 21:02:09.896851 2323591 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000ea08b0}
	I0530 21:02:09.896892 2323591 network_create.go:123] attempt to create docker network ingress-addon-legacy-208395 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0530 21:02:09.896950 2323591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-208395 ingress-addon-legacy-208395
	I0530 21:02:09.973450 2323591 network_create.go:107] docker network ingress-addon-legacy-208395 192.168.49.0/24 created
	I0530 21:02:09.973484 2323591 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-208395" container
	I0530 21:02:09.973564 2323591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0530 21:02:09.991523 2323591 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-208395 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-208395 --label created_by.minikube.sigs.k8s.io=true
	I0530 21:02:10.013054 2323591 oci.go:103] Successfully created a docker volume ingress-addon-legacy-208395
	I0530 21:02:10.013144 2323591 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-208395-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-208395 --entrypoint /usr/bin/test -v ingress-addon-legacy-208395:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0530 21:02:11.770223 2323591 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-208395-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-208395 --entrypoint /usr/bin/test -v ingress-addon-legacy-208395:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (1.757037686s)
	I0530 21:02:11.770253 2323591 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-208395
	I0530 21:02:11.770271 2323591 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0530 21:02:11.770290 2323591 kic.go:190] Starting extracting preloaded images to volume ...
	I0530 21:02:11.770383 2323591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-208395:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0530 21:02:16.666737 2323591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-208395:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.896308909s)
	I0530 21:02:16.666768 2323591 kic.go:199] duration metric: took 4.896474 seconds to extract preloaded images to volume
	W0530 21:02:16.666915 2323591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0530 21:02:16.667024 2323591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0530 21:02:16.735190 2323591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-208395 --name ingress-addon-legacy-208395 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-208395 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-208395 --network ingress-addon-legacy-208395 --ip 192.168.49.2 --volume ingress-addon-legacy-208395:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0530 21:02:17.082260 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Running}}
	I0530 21:02:17.110486 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Status}}
	I0530 21:02:17.142665 2323591 cli_runner.go:164] Run: docker exec ingress-addon-legacy-208395 stat /var/lib/dpkg/alternatives/iptables
	I0530 21:02:17.233493 2323591 oci.go:144] the created container "ingress-addon-legacy-208395" has a running status.
	I0530 21:02:17.233547 2323591 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa...
	I0530 21:02:17.462160 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0530 21:02:17.462227 2323591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0530 21:02:17.500461 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Status}}
	I0530 21:02:17.527797 2323591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0530 21:02:17.527815 2323591 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-208395 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0530 21:02:17.608579 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Status}}
	I0530 21:02:17.645468 2323591 machine.go:88] provisioning docker machine ...
	I0530 21:02:17.645498 2323591 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-208395"
	I0530 21:02:17.645569 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:17.675053 2323591 main.go:141] libmachine: Using SSH client type: native
	I0530 21:02:17.675528 2323591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 40961 <nil> <nil>}
	I0530 21:02:17.675542 2323591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-208395 && echo "ingress-addon-legacy-208395" | sudo tee /etc/hostname
	I0530 21:02:17.676341 2323591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54966->127.0.0.1:40961: read: connection reset by peer
	I0530 21:02:20.820513 2323591 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-208395
	
	I0530 21:02:20.820667 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:20.840130 2323591 main.go:141] libmachine: Using SSH client type: native
	I0530 21:02:20.840562 2323591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 40961 <nil> <nil>}
	I0530 21:02:20.840580 2323591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-208395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-208395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-208395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0530 21:02:20.970948 2323591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0530 21:02:20.970973 2323591 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16597-2288886/.minikube CaCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16597-2288886/.minikube}
	I0530 21:02:20.970992 2323591 ubuntu.go:177] setting up certificates
	I0530 21:02:20.971001 2323591 provision.go:83] configureAuth start
	I0530 21:02:20.971062 2323591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-208395
	I0530 21:02:20.990664 2323591 provision.go:138] copyHostCerts
	I0530 21:02:20.990724 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem
	I0530 21:02:20.990762 2323591 exec_runner.go:144] found /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem, removing ...
	I0530 21:02:20.990773 2323591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem
	I0530 21:02:20.990856 2323591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem (1078 bytes)
	I0530 21:02:20.991001 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem
	I0530 21:02:20.991023 2323591 exec_runner.go:144] found /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem, removing ...
	I0530 21:02:20.991030 2323591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem
	I0530 21:02:20.991063 2323591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem (1123 bytes)
	I0530 21:02:20.991111 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem
	I0530 21:02:20.991130 2323591 exec_runner.go:144] found /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem, removing ...
	I0530 21:02:20.991137 2323591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem
	I0530 21:02:20.991169 2323591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem (1679 bytes)
	I0530 21:02:20.991224 2323591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-208395 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-208395]
	I0530 21:02:21.796871 2323591 provision.go:172] copyRemoteCerts
	I0530 21:02:21.796942 2323591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0530 21:02:21.796989 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:21.816762 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:02:21.912465 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0530 21:02:21.912523 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0530 21:02:21.941640 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0530 21:02:21.941708 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0530 21:02:21.971416 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0530 21:02:21.971477 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0530 21:02:22.002319 2323591 provision.go:86] duration metric: configureAuth took 1.03130337s
	I0530 21:02:22.002350 2323591 ubuntu.go:193] setting minikube options for container-runtime
	I0530 21:02:22.002597 2323591 config.go:182] Loaded profile config "ingress-addon-legacy-208395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0530 21:02:22.002611 2323591 machine.go:91] provisioned docker machine in 4.357125675s
	I0530 21:02:22.002619 2323591 client.go:171] LocalClient.Create took 12.1679688s
	I0530 21:02:22.002641 2323591 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-208395" took 12.168034227s
	I0530 21:02:22.002654 2323591 start.go:300] post-start starting for "ingress-addon-legacy-208395" (driver="docker")
	I0530 21:02:22.002660 2323591 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0530 21:02:22.002733 2323591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0530 21:02:22.002780 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:22.022420 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:02:22.116458 2323591 ssh_runner.go:195] Run: cat /etc/os-release
	I0530 21:02:22.120839 2323591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0530 21:02:22.120923 2323591 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0530 21:02:22.120941 2323591 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0530 21:02:22.120948 2323591 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0530 21:02:22.120957 2323591 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/addons for local assets ...
	I0530 21:02:22.121016 2323591 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/files for local assets ...
	I0530 21:02:22.121107 2323591 filesync.go:149] local asset: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem -> 22942922.pem in /etc/ssl/certs
	I0530 21:02:22.121123 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem -> /etc/ssl/certs/22942922.pem
	I0530 21:02:22.121237 2323591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0530 21:02:22.132295 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem --> /etc/ssl/certs/22942922.pem (1708 bytes)
	I0530 21:02:22.163612 2323591 start.go:303] post-start completed in 160.943288ms
	I0530 21:02:22.164022 2323591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-208395
	I0530 21:02:22.183024 2323591 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/config.json ...
	I0530 21:02:22.183305 2323591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0530 21:02:22.183359 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:22.202587 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:02:22.292328 2323591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0530 21:02:22.298688 2323591 start.go:128] duration metric: createHost completed in 12.467316421s
	I0530 21:02:22.298712 2323591 start.go:83] releasing machines lock for "ingress-addon-legacy-208395", held for 12.467458295s
	I0530 21:02:22.298783 2323591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-208395
	I0530 21:02:22.317821 2323591 ssh_runner.go:195] Run: cat /version.json
	I0530 21:02:22.317841 2323591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0530 21:02:22.317885 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:22.317917 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:02:22.339550 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:02:22.347499 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:02:22.430072 2323591 ssh_runner.go:195] Run: systemctl --version
	I0530 21:02:22.574849 2323591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0530 21:02:22.581190 2323591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0530 21:02:22.612520 2323591 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0530 21:02:22.612601 2323591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0530 21:02:22.649421 2323591 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0530 21:02:22.649446 2323591 start.go:481] detecting cgroup driver to use...
	I0530 21:02:22.649480 2323591 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0530 21:02:22.649539 2323591 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0530 21:02:22.665508 2323591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0530 21:02:22.679704 2323591 docker.go:193] disabling cri-docker service (if available) ...
	I0530 21:02:22.679816 2323591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0530 21:02:22.695967 2323591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0530 21:02:22.713227 2323591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0530 21:02:22.803739 2323591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0530 21:02:22.910717 2323591 docker.go:209] disabling docker service ...
	I0530 21:02:22.910835 2323591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0530 21:02:22.936435 2323591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0530 21:02:22.953141 2323591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0530 21:02:23.057890 2323591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0530 21:02:23.160777 2323591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0530 21:02:23.175291 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0530 21:02:23.196533 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0530 21:02:23.210311 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0530 21:02:23.224477 2323591 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0530 21:02:23.224552 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0530 21:02:23.239319 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 21:02:23.254373 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0530 21:02:23.268382 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 21:02:23.282645 2323591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0530 21:02:23.294880 2323591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
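
The sed sequence above rewrites /etc/containerd/config.toml in place: pause image pinned to 3.2, cgroupfs instead of the systemd cgroup driver, the runc v2 runtime, and the CNI conf dir. A hedged spot-check of the result (output illustrative, indentation trimmed):

    $ grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    sandbox_image = "registry.k8s.io/pause:3.2"
    SystemdCgroup = false
    conf_dir = "/etc/cni/net.d"
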
	I0530 21:02:23.308203 2323591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0530 21:02:23.319415 2323591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0530 21:02:23.330410 2323591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0530 21:02:23.432904 2323591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0530 21:02:23.517796 2323591 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0530 21:02:23.517923 2323591 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
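
The 60s socket wait is done in Go, but it reduces to a poll like this shell sketch (illustration only):

    for i in $(seq 1 60); do
      stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
      sleep 1
    done
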
	I0530 21:02:23.522952 2323591 start.go:549] Will wait 60s for crictl version
	I0530 21:02:23.523078 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:23.527903 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0530 21:02:23.577847 2323591 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0530 21:02:23.578020 2323591 ssh_runner.go:195] Run: containerd --version
	I0530 21:02:23.607957 2323591 ssh_runner.go:195] Run: containerd --version
	I0530 21:02:23.644653 2323591 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.21 ...
	I0530 21:02:23.646571 2323591 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-208395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 21:02:23.666722 2323591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0530 21:02:23.671873 2323591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
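
The /etc/hosts update above uses a filter-then-append pattern so the entry stays idempotent: drop any stale host.minikube.internal line, re-add the current one, and copy the temp file back with sudo. Generalized (variable name illustrative):

    ENTRY=$'192.168.49.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
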
	I0530 21:02:23.687247 2323591 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0530 21:02:23.687328 2323591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 21:02:23.743145 2323591 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0530 21:02:23.743220 2323591 ssh_runner.go:195] Run: which lz4
	I0530 21:02:23.748176 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0530 21:02:23.748310 2323591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0530 21:02:23.753284 2323591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0530 21:02:23.753388 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0530 21:02:25.952807 2323591 containerd.go:547] Took 2.204551 seconds to copy over tarball
	I0530 21:02:25.952876 2323591 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0530 21:02:28.723716 2323591 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770807868s)
	I0530 21:02:28.723807 2323591 containerd.go:554] Took 2.770976 seconds to extract the tarball
	I0530 21:02:28.723826 2323591 ssh_runner.go:146] rm: /preloaded.tar.lz4
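
Once the tarball is extracted into /var and containerd restarted, the next `crictl images` call (below) lists whatever the runtime picked up; the same check can be run by hand:

    $ sudo crictl images --output json
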
	I0530 21:02:28.811239 2323591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0530 21:02:28.920478 2323591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0530 21:02:29.008026 2323591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 21:02:29.066766 2323591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0530 21:02:29.066869 2323591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:02:29.066930 2323591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0530 21:02:29.067115 2323591 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0530 21:02:29.066873 2323591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0530 21:02:29.067117 2323591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0530 21:02:29.067290 2323591 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0530 21:02:29.067368 2323591 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0530 21:02:29.067132 2323591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0530 21:02:29.068888 2323591 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0530 21:02:29.068963 2323591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0530 21:02:29.069016 2323591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0530 21:02:29.069218 2323591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0530 21:02:29.069291 2323591 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0530 21:02:29.068897 2323591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0530 21:02:29.069390 2323591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:02:29.069702 2323591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0530 21:02:29.510063 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W0530 21:02:29.524442 2323591 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.524641 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W0530 21:02:29.533911 2323591 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.534189 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W0530 21:02:29.539504 2323591 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.539697 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W0530 21:02:29.548095 2323591 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.548316 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W0530 21:02:29.553733 2323591 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.553919 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W0530 21:02:29.559248 2323591 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.559441 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W0530 21:02:29.732620 2323591 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0530 21:02:29.732737 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
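
The "arch mismatch: want arm64 got amd64" warnings mean the client-side image cache holds amd64 variants while this node needs arm64. For a multi-arch tag you can list the platforms the registry actually serves (docker CLI assumed available):

    $ docker manifest inspect registry.k8s.io/kube-proxy:v1.18.20 | grep architecture
    # expect one entry per platform (amd64, arm64, ...) on a multi-arch tag
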
	I0530 21:02:29.835195 2323591 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0530 21:02:29.835246 2323591 cri.go:217] Removing image: registry.k8s.io/pause:3.2
	I0530 21:02:29.835309 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.457622 2323591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0530 21:02:30.457706 2323591 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0530 21:02:30.457767 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.484139 2323591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0530 21:02:30.484220 2323591 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0530 21:02:30.484293 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.484388 2323591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0530 21:02:30.484430 2323591 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0530 21:02:30.484468 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.493384 2323591 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0530 21:02:30.493460 2323591 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0530 21:02:30.493511 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.493584 2323591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0530 21:02:30.493608 2323591 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0530 21:02:30.493641 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.493699 2323591 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0530 21:02:30.493718 2323591 cri.go:217] Removing image: registry.k8s.io/coredns:1.6.7
	I0530 21:02:30.493738 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.493800 2323591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0530 21:02:30.493821 2323591 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:02:30.493843 2323591 ssh_runner.go:195] Run: which crictl
	I0530 21:02:30.493918 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0530 21:02:30.493971 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0530 21:02:30.498075 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0530 21:02:30.499302 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0530 21:02:30.613239 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0530 21:02:30.613345 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0530 21:02:30.613441 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:02:30.613504 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0530 21:02:30.613548 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0530 21:02:30.613605 2323591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0530 21:02:30.619751 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0530 21:02:30.619869 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0530 21:02:30.740198 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0530 21:02:30.740227 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0530 21:02:30.740283 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0530 21:02:30.740367 2323591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0530 21:02:30.740401 2323591 cache_images.go:92] LoadImages completed in 1.673575555s
	W0530 21:02:30.740541 2323591 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I0530 21:02:30.740603 2323591 ssh_runner.go:195] Run: sudo crictl info
	I0530 21:02:30.787346 2323591 cni.go:84] Creating CNI manager for ""
	I0530 21:02:30.787372 2323591 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 21:02:30.787387 2323591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0530 21:02:30.787406 2323591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-208395 NodeName:ingress-addon-legacy-208395 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0530 21:02:30.787549 2323591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-208395"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
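
The generated config above deviates from stock kubeadm mostly in paths (/var/lib/minikube/certs, /var/lib/minikube/etcd) and the disabled eviction thresholds. To compare it against upstream defaults, kubeadm can print them:

    kubeadm config print init-defaults
    kubeadm config print init-defaults --component-configs KubeletConfiguration
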
	
	I0530 21:02:30.787654 2323591 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-208395 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0530 21:02:30.787730 2323591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0530 21:02:30.799548 2323591 binaries.go:44] Found k8s binaries, skipping transfer
	I0530 21:02:30.799628 2323591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0530 21:02:30.812168 2323591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0530 21:02:30.834781 2323591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0530 21:02:30.856976 2323591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
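
With the drop-in (10-kubeadm.conf) and unit file copied into place, systemd can show the merged kubelet unit; its ExecStart should match the snippet above:

    systemctl cat kubelet
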
	I0530 21:02:30.878637 2323591 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0530 21:02:30.883210 2323591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0530 21:02:30.896854 2323591 certs.go:56] Setting up /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395 for IP: 192.168.49.2
	I0530 21:02:30.896886 2323591 certs.go:190] acquiring lock for shared ca certs: {Name:mkef74d64a59002b998e67685a207d5c04604358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:30.897073 2323591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key
	I0530 21:02:30.897124 2323591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key
	I0530 21:02:30.897176 2323591 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.key
	I0530 21:02:30.897188 2323591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt with IP's: []
	I0530 21:02:31.193692 2323591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt ...
	I0530 21:02:31.193725 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: {Name:mk3b38c81edc05fd51cd82cbf6a499bc1a8de17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:31.193939 2323591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.key ...
	I0530 21:02:31.193961 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.key: {Name:mk526979642c27ce6e16998646625b26dd05edd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:31.194471 2323591 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key.dd3b5fb2
	I0530 21:02:31.194493 2323591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0530 21:02:31.693309 2323591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt.dd3b5fb2 ...
	I0530 21:02:31.693340 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt.dd3b5fb2: {Name:mkba64ee547b547468303a6c720b8440d439843f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:31.693949 2323591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key.dd3b5fb2 ...
	I0530 21:02:31.693965 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key.dd3b5fb2: {Name:mke8549794dafa4a996fe08fee5d90611e210246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:31.694046 2323591 certs.go:337] copying /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt
	I0530 21:02:31.694119 2323591 certs.go:341] copying /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key
	I0530 21:02:31.694176 2323591 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.key
	I0530 21:02:31.694192 2323591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.crt with IP's: []
	I0530 21:02:32.019154 2323591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.crt ...
	I0530 21:02:32.019185 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.crt: {Name:mk33b2da01e9dd9b4d897d1bdeb9e0d5f4ebafb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:02:32.019752 2323591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.key ...
	I0530 21:02:32.019771 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.key: {Name:mk03dbdadc3197c34f9cd29f7ba3ba2df2126aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
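
minikube generates these certificates in Go (crypto.go); as a rough openssl sketch of the equivalent client-cert flow (subject fields are illustrative assumptions, not taken from this run):

    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt
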
	I0530 21:02:32.019863 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0530 21:02:32.019883 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0530 21:02:32.019900 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0530 21:02:32.019920 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0530 21:02:32.019936 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0530 21:02:32.019955 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0530 21:02:32.019969 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0530 21:02:32.020004 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0530 21:02:32.020065 2323591 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292.pem (1338 bytes)
	W0530 21:02:32.020105 2323591 certs.go:433] ignoring /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292_empty.pem, impossibly tiny 0 bytes
	I0530 21:02:32.020119 2323591 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem (1675 bytes)
	I0530 21:02:32.020154 2323591 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem (1078 bytes)
	I0530 21:02:32.020185 2323591 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem (1123 bytes)
	I0530 21:02:32.020219 2323591 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem (1679 bytes)
	I0530 21:02:32.020268 2323591 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem (1708 bytes)
	I0530 21:02:32.020302 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem -> /usr/share/ca-certificates/22942922.pem
	I0530 21:02:32.020324 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:02:32.020338 2323591 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292.pem -> /usr/share/ca-certificates/2294292.pem
	I0530 21:02:32.020926 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0530 21:02:32.051231 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0530 21:02:32.080473 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0530 21:02:32.110055 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0530 21:02:32.140526 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0530 21:02:32.169385 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0530 21:02:32.198272 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0530 21:02:32.227537 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0530 21:02:32.256954 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem --> /usr/share/ca-certificates/22942922.pem (1708 bytes)
	I0530 21:02:32.287036 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0530 21:02:32.316470 2323591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292.pem --> /usr/share/ca-certificates/2294292.pem (1338 bytes)
	I0530 21:02:32.347273 2323591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0530 21:02:32.372185 2323591 ssh_runner.go:195] Run: openssl version
	I0530 21:02:32.379906 2323591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22942922.pem && ln -fs /usr/share/ca-certificates/22942922.pem /etc/ssl/certs/22942922.pem"
	I0530 21:02:32.392559 2323591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22942922.pem
	I0530 21:02:32.397420 2323591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 30 20:58 /usr/share/ca-certificates/22942922.pem
	I0530 21:02:32.397495 2323591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22942922.pem
	I0530 21:02:32.406375 2323591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22942922.pem /etc/ssl/certs/3ec20f2e.0"
	I0530 21:02:32.419071 2323591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0530 21:02:32.431740 2323591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:02:32.437064 2323591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 30 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:02:32.437156 2323591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:02:32.446568 2323591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0530 21:02:32.459002 2323591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2294292.pem && ln -fs /usr/share/ca-certificates/2294292.pem /etc/ssl/certs/2294292.pem"
	I0530 21:02:32.471473 2323591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2294292.pem
	I0530 21:02:32.476352 2323591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 30 20:58 /usr/share/ca-certificates/2294292.pem
	I0530 21:02:32.476439 2323591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2294292.pem
	I0530 21:02:32.485901 2323591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2294292.pem /etc/ssl/certs/51391683.0"
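
The .0 symlink names above are OpenSSL subject hashes, which is exactly what the `openssl x509 -hash` calls compute; e.g. minikubeCA.pem hashes to b5213941, hence the /etc/ssl/certs/b5213941.0 link:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
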
	I0530 21:02:32.497936 2323591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0530 21:02:32.502528 2323591 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0530 21:02:32.502608 2323591 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-208395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:02:32.502702 2323591 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0530 21:02:32.502768 2323591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0530 21:02:32.546002 2323591 cri.go:88] found id: ""
	I0530 21:02:32.546119 2323591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0530 21:02:32.557759 2323591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0530 21:02:32.569628 2323591 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0530 21:02:32.569704 2323591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0530 21:02:32.581142 2323591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0530 21:02:32.581208 2323591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0530 21:02:32.643821 2323591 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0530 21:02:32.643917 2323591 kubeadm.go:322] [preflight] Running pre-flight checks
	I0530 21:02:32.699676 2323591 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0530 21:02:32.699833 2323591 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-aws
	I0530 21:02:32.699874 2323591 kubeadm.go:322] OS: Linux
	I0530 21:02:32.699984 2323591 kubeadm.go:322] CGROUPS_CPU: enabled
	I0530 21:02:32.700064 2323591 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0530 21:02:32.700141 2323591 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0530 21:02:32.700213 2323591 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0530 21:02:32.700282 2323591 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0530 21:02:32.700363 2323591 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0530 21:02:32.795545 2323591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0530 21:02:32.795663 2323591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0530 21:02:32.795778 2323591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0530 21:02:33.048765 2323591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0530 21:02:33.050428 2323591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0530 21:02:33.050782 2323591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0530 21:02:33.168360 2323591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0530 21:02:33.171765 2323591 out.go:204]   - Generating certificates and keys ...
	I0530 21:02:33.171859 2323591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0530 21:02:33.171939 2323591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0530 21:02:33.781081 2323591 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0530 21:02:34.102910 2323591 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0530 21:02:34.483752 2323591 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0530 21:02:34.797744 2323591 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0530 21:02:35.010669 2323591 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0530 21:02:35.010856 2323591 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-208395 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0530 21:02:35.700594 2323591 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0530 21:02:35.701270 2323591 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-208395 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0530 21:02:35.938360 2323591 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0530 21:02:36.969122 2323591 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0530 21:02:37.383515 2323591 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0530 21:02:37.383858 2323591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0530 21:02:37.571515 2323591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0530 21:02:37.800440 2323591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0530 21:02:38.012938 2323591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0530 21:02:38.350850 2323591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0530 21:02:38.351856 2323591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0530 21:02:38.354735 2323591 out.go:204]   - Booting up control plane ...
	I0530 21:02:38.354837 2323591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0530 21:02:38.369725 2323591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0530 21:02:38.369803 2323591 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0530 21:02:38.369892 2323591 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0530 21:02:38.372310 2323591 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0530 21:02:51.374936 2323591 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002921 seconds
	I0530 21:02:51.375082 2323591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0530 21:02:51.393543 2323591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0530 21:02:51.917004 2323591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0530 21:02:51.917154 2323591 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-208395 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0530 21:02:52.425969 2323591 kubeadm.go:322] [bootstrap-token] Using token: act8kv.q4g4lkoqlls1uh4o
	I0530 21:02:52.428446 2323591 out.go:204]   - Configuring RBAC rules ...
	I0530 21:02:52.428562 2323591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0530 21:02:52.434709 2323591 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0530 21:02:52.444680 2323591 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0530 21:02:52.451907 2323591 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0530 21:02:52.460274 2323591 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0530 21:02:52.464409 2323591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0530 21:02:52.496235 2323591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0530 21:02:52.793974 2323591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0530 21:02:52.850269 2323591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0530 21:02:52.851981 2323591 kubeadm.go:322] 
	I0530 21:02:52.852052 2323591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0530 21:02:52.852065 2323591 kubeadm.go:322] 
	I0530 21:02:52.852138 2323591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0530 21:02:52.852148 2323591 kubeadm.go:322] 
	I0530 21:02:52.852173 2323591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0530 21:02:52.852232 2323591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0530 21:02:52.852281 2323591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0530 21:02:52.852290 2323591 kubeadm.go:322] 
	I0530 21:02:52.852339 2323591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0530 21:02:52.852413 2323591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0530 21:02:52.852492 2323591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0530 21:02:52.852502 2323591 kubeadm.go:322] 
	I0530 21:02:52.852582 2323591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0530 21:02:52.852663 2323591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0530 21:02:52.852678 2323591 kubeadm.go:322] 
	I0530 21:02:52.852758 2323591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token act8kv.q4g4lkoqlls1uh4o \
	I0530 21:02:52.852865 2323591 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff077636a4006c51f7456795481b97b5286c2b636cefd4a65a893c56dd417d66 \
	I0530 21:02:52.852893 2323591 kubeadm.go:322]     --control-plane 
	I0530 21:02:52.852901 2323591 kubeadm.go:322] 
	I0530 21:02:52.852981 2323591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0530 21:02:52.852991 2323591 kubeadm.go:322] 
	I0530 21:02:52.853068 2323591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token act8kv.q4g4lkoqlls1uh4o \
	I0530 21:02:52.853169 2323591 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ff077636a4006c51f7456795481b97b5286c2b636cefd4a65a893c56dd417d66 
	I0530 21:02:52.861476 2323591 kubeadm.go:322] W0530 21:02:32.643022    1103 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0530 21:02:52.861705 2323591 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
	I0530 21:02:52.861808 2323591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0530 21:02:52.861945 2323591 kubeadm.go:322] W0530 21:02:38.365517    1103 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0530 21:02:52.862065 2323591 kubeadm.go:322] W0530 21:02:38.367152    1103 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
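
The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe (minikube keeps its certs under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
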
	I0530 21:02:52.862083 2323591 cni.go:84] Creating CNI manager for ""
	I0530 21:02:52.862092 2323591 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 21:02:52.864875 2323591 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0530 21:02:52.867413 2323591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0530 21:02:52.883130 2323591 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0530 21:02:52.883154 2323591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0530 21:02:52.918619 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
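
After the manifest is applied, the CNI pods should roll out in kube-system; a hedged check (the DaemonSet name "kindnet" is what minikube's kindnet manifest typically uses):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
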
	I0530 21:02:53.440370 2323591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0530 21:02:53.440503 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:53.440599 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=6d0d5d534b34391ed9438fcde26494d33a798fae minikube.k8s.io/name=ingress-addon-legacy-208395 minikube.k8s.io/updated_at=2023_05_30T21_02_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:53.629532 2323591 ops.go:34] apiserver oom_adj: -16
	I0530 21:02:53.629621 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:54.252183 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:54.752163 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:55.252280 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:55.752170 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:56.251687 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:56.752595 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:57.252341 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:57.751686 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:58.251598 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:58.751628 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:59.252319 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:02:59.752495 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:00.251726 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:00.752172 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:01.252143 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:01.751667 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:02.252500 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:02.751752 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:03.252516 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:03.751603 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:04.251727 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:04.752147 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:05.252536 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:05.752181 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:06.252554 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:06.751780 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:07.252066 2323591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0530 21:03:07.538276 2323591 kubeadm.go:1076] duration metric: took 14.097814535s to wait for elevateKubeSystemPrivileges.
	I0530 21:03:07.538335 2323591 kubeadm.go:406] StartCluster complete in 35.035754463s
	I0530 21:03:07.538369 2323591 settings.go:142] acquiring lock: {Name:mkdbeb66ef6240a2ca39c4b606ba49055796e4d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:03:07.538497 2323591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:03:07.539451 2323591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/kubeconfig: {Name:mk0fdfd8357f1362eedcc9930d50aa3f3a348d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:03:07.540482 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0530 21:03:07.541073 2323591 config.go:182] Loaded profile config "ingress-addon-legacy-208395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0530 21:03:07.541204 2323591 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0530 21:03:07.541284 2323591 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-208395"
	I0530 21:03:07.541366 2323591 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-208395"
	I0530 21:03:07.541417 2323591 host.go:66] Checking if "ingress-addon-legacy-208395" exists ...
	I0530 21:03:07.542009 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Status}}
	I0530 21:03:07.542219 2323591 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-208395"
	I0530 21:03:07.542243 2323591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-208395"
	I0530 21:03:07.542588 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Status}}
	I0530 21:03:07.543115 2323591 kapi.go:59] client config for ingress-addon-legacy-208395: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt", KeyFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.key", CAFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13ddbe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
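	The dump above is client-go's rest.Config with its sanitized TLS section; the same client can be built from just the host and the three certificate paths. A minimal client-go sketch under those assumptions (the cert paths are placeholders for the CertFile/KeyFile/CAFile values shown above, and a current client-go release is assumed rather than the 1.18-era one under test):
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/rest"
	    )
	
	    func main() {
	    	// Placeholder paths: substitute the profile's client.crt,
	    	// client.key and ca.crt from the rest.Config dump above.
	    	cfg := &rest.Config{
	    		Host: "https://192.168.49.2:8443",
	    		TLSClientConfig: rest.TLSClientConfig{
	    			CertFile: "/path/to/client.crt",
	    			KeyFile:  "/path/to/client.key",
	    			CAFile:   "/path/to/ca.crt",
	    		},
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// List nodes as a smoke test, mirroring the node checks below.
	    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("nodes:", len(nodes.Items))
	    }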
	I0530 21:03:07.544882 2323591 cert_rotation.go:137] Starting client certificate rotation controller
	I0530 21:03:07.606788 2323591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:03:07.609272 2323591 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0530 21:03:07.609294 2323591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0530 21:03:07.609501 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:03:07.630855 2323591 kapi.go:59] client config for ingress-addon-legacy-208395: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt", KeyFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.key", CAFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13ddbe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0530 21:03:07.654632 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:03:07.670789 2323591 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-208395"
	I0530 21:03:07.670840 2323591 host.go:66] Checking if "ingress-addon-legacy-208395" exists ...
	I0530 21:03:07.671312 2323591 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-208395 --format={{.State.Status}}
	I0530 21:03:07.698459 2323591 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0530 21:03:07.698482 2323591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0530 21:03:07.698556 2323591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-208395
	I0530 21:03:07.737248 2323591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40961 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/ingress-addon-legacy-208395/id_rsa Username:docker}
	I0530 21:03:07.928169 2323591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0530 21:03:07.962833 2323591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0530 21:03:08.018899 2323591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
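	The one-liner above rewrites the CoreDNS ConfigMap in place: the sed splices a "hosts" block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then replaces the ConfigMap. A sketch of the resulting Corefile fragment, reconstructed from the sed expressions themselves (the surrounding plugins are the stock CoreDNS defaults and are assumptions here, abbreviated with "..."):
	
	    .:53 {
	        errors
	        log
	        health
	        kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	    }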
	I0530 21:03:08.098957 2323591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-208395" context rescaled to 1 replicas
	I0530 21:03:08.099049 2323591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0530 21:03:08.102147 2323591 out.go:177] * Verifying Kubernetes components...
	I0530 21:03:08.104373 2323591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 21:03:08.892622 2323591 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0530 21:03:08.893228 2323591 kapi.go:59] client config for ingress-addon-legacy-208395: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt", KeyFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.key", CAFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13ddbe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0530 21:03:08.898331 2323591 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0530 21:03:08.898720 2323591 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-208395" to be "Ready" ...
	I0530 21:03:08.901711 2323591 addons.go:499] enable addons completed in 1.360500386s: enabled=[default-storageclass storage-provisioner]
	I0530 21:03:08.907924 2323591 node_ready.go:49] node "ingress-addon-legacy-208395" has status "Ready":"True"
	I0530 21:03:08.907953 2323591 node_ready.go:38] duration metric: took 6.186024ms waiting for node "ingress-addon-legacy-208395" to be "Ready" ...
	I0530 21:03:08.907963 2323591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0530 21:03:08.918325 2323591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-srs89" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:10.940825 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:12.943015 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:15.439397 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:17.440077 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:19.939892 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:22.439603 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:24.440399 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:26.448467 2323591 pod_ready.go:102] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"False"
	I0530 21:03:26.940535 2323591 pod_ready.go:92] pod "coredns-66bff467f8-srs89" in "kube-system" namespace has status "Ready":"True"
	I0530 21:03:26.940564 2323591 pod_ready.go:81] duration metric: took 18.02220098s waiting for pod "coredns-66bff467f8-srs89" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.940576 2323591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.945859 2323591 pod_ready.go:92] pod "etcd-ingress-addon-legacy-208395" in "kube-system" namespace has status "Ready":"True"
	I0530 21:03:26.945885 2323591 pod_ready.go:81] duration metric: took 5.300854ms waiting for pod "etcd-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.945902 2323591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.951255 2323591 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-208395" in "kube-system" namespace has status "Ready":"True"
	I0530 21:03:26.951281 2323591 pod_ready.go:81] duration metric: took 5.371098ms waiting for pod "kube-apiserver-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.951293 2323591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.956507 2323591 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-208395" in "kube-system" namespace has status "Ready":"True"
	I0530 21:03:26.956534 2323591 pod_ready.go:81] duration metric: took 5.232727ms waiting for pod "kube-controller-manager-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.956547 2323591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l96lv" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.961915 2323591 pod_ready.go:92] pod "kube-proxy-l96lv" in "kube-system" namespace has status "Ready":"True"
	I0530 21:03:26.961953 2323591 pod_ready.go:81] duration metric: took 5.399036ms waiting for pod "kube-proxy-l96lv" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:26.961966 2323591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:27.134292 2323591 request.go:628] Waited for 172.225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-208395
	I0530 21:03:27.334335 2323591 request.go:628] Waited for 197.19797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-208395
	I0530 21:03:27.337262 2323591 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-208395" in "kube-system" namespace has status "Ready":"True"
	I0530 21:03:27.337353 2323591 pod_ready.go:81] duration metric: took 375.347663ms waiting for pod "kube-scheduler-ingress-addon-legacy-208395" in "kube-system" namespace to be "Ready" ...
	I0530 21:03:27.337383 2323591 pod_ready.go:38] duration metric: took 18.42940916s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0530 21:03:27.337406 2323591 api_server.go:52] waiting for apiserver process to appear ...
	I0530 21:03:27.337472 2323591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:03:27.351797 2323591 api_server.go:72] duration metric: took 19.252682988s to wait for apiserver process to appear ...
	I0530 21:03:27.351823 2323591 api_server.go:88] waiting for apiserver healthz status ...
	I0530 21:03:27.351840 2323591 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0530 21:03:27.361255 2323591 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
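	The same probe can be reproduced by hand. A sketch (the certificate arguments reuse the profile's client.crt, client.key and ca.crt from the kapi config dump above; they are passed because anonymous access to /healthz may be denied on this cluster):
	
	    curl --cacert ca.crt --cert client.crt --key client.key \
	      https://192.168.49.2:8443/healthz
	    ok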
	I0530 21:03:27.362358 2323591 api_server.go:141] control plane version: v1.18.20
	I0530 21:03:27.362383 2323591 api_server.go:131] duration metric: took 10.55256ms to wait for apiserver health ...
	I0530 21:03:27.362392 2323591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0530 21:03:27.535030 2323591 request.go:628] Waited for 172.482689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0530 21:03:27.541798 2323591 system_pods.go:59] 8 kube-system pods found
	I0530 21:03:27.541838 2323591 system_pods.go:61] "coredns-66bff467f8-srs89" [e4996f9b-e13a-4701-a7b8-f03fee52d0f0] Running
	I0530 21:03:27.541845 2323591 system_pods.go:61] "etcd-ingress-addon-legacy-208395" [9272e1aa-4645-4501-b2ff-ec025f35fd36] Running
	I0530 21:03:27.541852 2323591 system_pods.go:61] "kindnet-qq9kh" [dda489ee-ddd2-4f12-902d-fd3ddab428cc] Running
	I0530 21:03:27.541857 2323591 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-208395" [489e1695-9ef0-43cf-bd15-e7c7bc78f345] Running
	I0530 21:03:27.541886 2323591 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-208395" [08e16c5a-8594-423e-b015-427e0e51e5a8] Running
	I0530 21:03:27.541892 2323591 system_pods.go:61] "kube-proxy-l96lv" [ddff7f58-c360-492a-b6b0-393f68b5b9a3] Running
	I0530 21:03:27.541904 2323591 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-208395" [47d47363-baaf-4338-afc2-67d0c302e5de] Running
	I0530 21:03:27.541909 2323591 system_pods.go:61] "storage-provisioner" [e05d5c73-7aa0-4b40-a071-56fc9df025fe] Running
	I0530 21:03:27.541926 2323591 system_pods.go:74] duration metric: took 179.528611ms to wait for pod list to return data ...
	I0530 21:03:27.541941 2323591 default_sa.go:34] waiting for default service account to be created ...
	I0530 21:03:27.735002 2323591 request.go:628] Waited for 192.973463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0530 21:03:27.737739 2323591 default_sa.go:45] found service account: "default"
	I0530 21:03:27.737769 2323591 default_sa.go:55] duration metric: took 195.814638ms for default service account to be created ...
	I0530 21:03:27.737780 2323591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0530 21:03:27.935229 2323591 request.go:628] Waited for 197.345456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0530 21:03:27.941210 2323591 system_pods.go:86] 8 kube-system pods found
	I0530 21:03:27.941243 2323591 system_pods.go:89] "coredns-66bff467f8-srs89" [e4996f9b-e13a-4701-a7b8-f03fee52d0f0] Running
	I0530 21:03:27.941251 2323591 system_pods.go:89] "etcd-ingress-addon-legacy-208395" [9272e1aa-4645-4501-b2ff-ec025f35fd36] Running
	I0530 21:03:27.941256 2323591 system_pods.go:89] "kindnet-qq9kh" [dda489ee-ddd2-4f12-902d-fd3ddab428cc] Running
	I0530 21:03:27.941261 2323591 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-208395" [489e1695-9ef0-43cf-bd15-e7c7bc78f345] Running
	I0530 21:03:27.941267 2323591 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-208395" [08e16c5a-8594-423e-b015-427e0e51e5a8] Running
	I0530 21:03:27.941276 2323591 system_pods.go:89] "kube-proxy-l96lv" [ddff7f58-c360-492a-b6b0-393f68b5b9a3] Running
	I0530 21:03:27.941283 2323591 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-208395" [47d47363-baaf-4338-afc2-67d0c302e5de] Running
	I0530 21:03:27.941294 2323591 system_pods.go:89] "storage-provisioner" [e05d5c73-7aa0-4b40-a071-56fc9df025fe] Running
	I0530 21:03:27.941328 2323591 system_pods.go:126] duration metric: took 203.543271ms to wait for k8s-apps to be running ...
	I0530 21:03:27.941339 2323591 system_svc.go:44] waiting for kubelet service to be running ....
	I0530 21:03:27.941395 2323591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 21:03:27.955686 2323591 system_svc.go:56] duration metric: took 14.33739ms WaitForService to wait for kubelet.
	I0530 21:03:27.955716 2323591 kubeadm.go:581] duration metric: took 19.856607904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0530 21:03:27.955753 2323591 node_conditions.go:102] verifying NodePressure condition ...
	I0530 21:03:28.135189 2323591 request.go:628] Waited for 179.367202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0530 21:03:28.138547 2323591 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0530 21:03:28.138586 2323591 node_conditions.go:123] node cpu capacity is 2
	I0530 21:03:28.138599 2323591 node_conditions.go:105] duration metric: took 182.839845ms to run NodePressure ...
	I0530 21:03:28.138611 2323591 start.go:228] waiting for startup goroutines ...
	I0530 21:03:28.138618 2323591 start.go:233] waiting for cluster config update ...
	I0530 21:03:28.138628 2323591 start.go:242] writing updated cluster config ...
	I0530 21:03:28.138965 2323591 ssh_runner.go:195] Run: rm -f paused
	I0530 21:03:28.199621 2323591 start.go:568] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0530 21:03:28.202040 2323591 out.go:177] 
	W0530 21:03:28.204459 2323591 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0530 21:03:28.206679 2323591 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0530 21:03:28.209009 2323591 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-208395" cluster and "default" namespace by default
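	For the skew warning above, minikube's bundled kubectl sidesteps the nine-minor-version gap between the host's v1.27.2 binary and the v1.18.20 cluster. A sketch of the suggested invocation (adding -p to pin the profile is an assumption for this multi-profile CI run):
	
	    minikube kubectl -p ingress-addon-legacy-208395 -- get pods -A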
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a4208213ad6fd       13753a81eccfd       9 seconds ago        Exited              hello-world-app           2                   6d86fda41e35b       hello-world-app-5f5d8b66bb-rvplv
	65a8f9fd7e3be       5ee47dcca7543       35 seconds ago       Running             nginx                     0                   ff2ccfdbc10e6       nginx
	5452bb18a285b       d7f0cba3aa5bf       57 seconds ago       Exited              controller                0                   211835cc7753c       ingress-nginx-controller-7fcf777cb7-bb25b
	2151f13c932ef       a883f7fc35610       About a minute ago   Exited              patch                     0                   a490e86b4a29c       ingress-nginx-admission-patch-2zrnm
	87fd955f013d7       a883f7fc35610       About a minute ago   Exited              create                    0                   07aa7aedc7ec5       ingress-nginx-admission-create-mpbgg
	067a4da04caff       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   fb195a6e3e05e       coredns-66bff467f8-srs89
	812b33a4e140a       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   1c6a85d6a9390       storage-provisioner
	5d11c88dbfbe4       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   59459de11806a       kindnet-qq9kh
	6586ff14be43b       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   84a50330881ef       kube-proxy-l96lv
	16771db0749ac       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   6a120c154c3e3       kube-apiserver-ingress-addon-legacy-208395
	4f7da19e1cc29       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   37c2c446ce36a       kube-controller-manager-ingress-addon-legacy-208395
	aa5c8ccbe036d       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   d69a9b5f9f274       etcd-ingress-addon-legacy-208395
	babf37be4d472       095f37015706d       About a minute ago   Running             kube-scheduler            0                   004a90ddf7306       kube-scheduler-ingress-addon-legacy-208395
	
	* 
	* ==> containerd <==
	* May 30 21:04:25 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:25.763676620Z" level=info msg="RemoveContainer for \"932197c6ff93ae71487df3ae5862e39248805ecdf2c6587debbf521560d33503\" returns successfully"
	May 30 21:04:27 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:27.187526869Z" level=info msg="StopContainer for \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" with timeout 2 (s)"
	May 30 21:04:27 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:27.187932519Z" level=info msg="Stop container \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" with signal terminated"
	May 30 21:04:27 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:27.201618461Z" level=info msg="StopContainer for \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" with timeout 2 (s)"
	May 30 21:04:27 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:27.202557144Z" level=info msg="Skipping the sending of signal terminated to container \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" because a prior stop with timeout>0 request already sent the signal"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.198908035Z" level=info msg="Kill container \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\""
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.203808359Z" level=info msg="Kill container \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\""
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.288027186Z" level=info msg="shim disconnected" id=5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.288092794Z" level=warning msg="cleaning up after shim disconnected" id=5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf namespace=k8s.io
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.288103747Z" level=info msg="cleaning up dead shim"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.300786924Z" level=warning msg="cleanup warnings time=\"2023-05-30T21:04:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4534 runtime=io.containerd.runc.v2\n"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.303540206Z" level=info msg="StopContainer for \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" returns successfully"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.303566355Z" level=info msg="StopContainer for \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" returns successfully"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.304285824Z" level=info msg="StopPodSandbox for \"211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb\""
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.304361228Z" level=info msg="Container to stop \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.304286562Z" level=info msg="StopPodSandbox for \"211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb\""
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.304589485Z" level=info msg="Container to stop \"5452bb18a285b3206f9619daf35f338de87bec8d8d08bf290fead46ce92538bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.343327205Z" level=info msg="shim disconnected" id=211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.343409625Z" level=warning msg="cleaning up after shim disconnected" id=211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb namespace=k8s.io
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.343422745Z" level=info msg="cleaning up dead shim"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.356614258Z" level=warning msg="cleanup warnings time=\"2023-05-30T21:04:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4572 runtime=io.containerd.runc.v2\n"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.442959819Z" level=info msg="TearDown network for sandbox \"211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb\" successfully"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.443139264Z" level=info msg="StopPodSandbox for \"211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb\" returns successfully"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.454511436Z" level=info msg="TearDown network for sandbox \"211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb\" successfully"
	May 30 21:04:29 ingress-addon-legacy-208395 containerd[818]: time="2023-05-30T21:04:29.454576486Z" level=info msg="StopPodSandbox for \"211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb\" returns successfully"
	
	* 
	* ==> coredns [067a4da04caff936ca7aabba2e18561c80b9c04461bf1cdf7ed0a5c778f3789e] <==
	* [INFO] 10.244.0.5:43018 - 4303 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045456s
	[INFO] 10.244.0.5:43018 - 41491 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046284s
	[INFO] 10.244.0.5:43018 - 8685 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057608s
	[INFO] 10.244.0.5:43018 - 20228 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007337s
	[INFO] 10.244.0.5:43018 - 22746 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00119437s
	[INFO] 10.244.0.5:43018 - 64010 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000885162s
	[INFO] 10.244.0.5:43018 - 5843 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064582s
	[INFO] 10.244.0.5:53673 - 52175 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000249146s
	[INFO] 10.244.0.5:53673 - 7062 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061702s
	[INFO] 10.244.0.5:53673 - 54566 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036348s
	[INFO] 10.244.0.5:53673 - 49411 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058986s
	[INFO] 10.244.0.5:53673 - 26381 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078941s
	[INFO] 10.244.0.5:53673 - 21952 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000271391s
	[INFO] 10.244.0.5:53673 - 15786 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001079663s
	[INFO] 10.244.0.5:53673 - 29019 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001135457s
	[INFO] 10.244.0.5:53673 - 60830 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069275s
	[INFO] 10.244.0.5:43160 - 18855 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00009316s
	[INFO] 10.244.0.5:43160 - 63264 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096163s
	[INFO] 10.244.0.5:43160 - 47746 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054079s
	[INFO] 10.244.0.5:43160 - 40789 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058847s
	[INFO] 10.244.0.5:43160 - 60814 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043643s
	[INFO] 10.244.0.5:43160 - 39520 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054547s
	[INFO] 10.244.0.5:43160 - 34233 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001351677s
	[INFO] 10.244.0.5:43160 - 11028 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001535634s
	[INFO] 10.244.0.5:43160 - 58524 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051536s
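	The NXDOMAIN ladder above is the pod's resolver walking its search list before trying the name absolutely: with the Kubernetes default of ndots:5, even the fully qualified service name gets each suffix appended first, and only the final bare query returns NOERROR. A sketch of the querying pod's /etc/resolv.conf, reconstructed from the suffixes in the log (10.244.0.5 sits in the ingress-nginx namespace; the nameserver address is the stock kube-dns ClusterIP and is an assumption):
	
	    nameserver 10.96.0.10
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    options ndots:5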
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-208395
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-208395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d0d5d534b34391ed9438fcde26494d33a798fae
	                    minikube.k8s.io/name=ingress-addon-legacy-208395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_30T21_02_53_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 May 2023 21:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-208395
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 May 2023 21:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 May 2023 21:04:26 +0000   Tue, 30 May 2023 21:02:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 May 2023 21:04:26 +0000   Tue, 30 May 2023 21:02:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 May 2023 21:04:26 +0000   Tue, 30 May 2023 21:02:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 May 2023 21:04:26 +0000   Tue, 30 May 2023 21:03:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-208395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 53dcce1195324526a316c6268dbca0e9
	  System UUID:                248ed89e-513a-4855-88af-b5890df39413
	  Boot ID:                    c7a134eb-0be2-46e6-bcc1-b9fd815daa7a
	  Kernel Version:             5.15.0-1036-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-rvplv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-66bff467f8-srs89                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-ingress-addon-legacy-208395                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-qq9kh                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-208395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-208395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-l96lv                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-ingress-addon-legacy-208395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)    100m (5%)
	  memory             120Mi (1%)    220Mi (2%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 114s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x5 over 114s)  kubelet     Node ingress-addon-legacy-208395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x4 over 114s)  kubelet     Node ingress-addon-legacy-208395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x4 over 114s)  kubelet     Node ingress-addon-legacy-208395 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-208395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-208395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-208395 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-208395 status is now: NodeReady
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001132] FS-Cache: O-key=[8] '2d655c0100000000'
	[  +0.000844] FS-Cache: N-cookie c=000001de [p=000001d5 fl=2 nc=0 na=1]
	[  +0.001042] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=000000001681a3f5
	[  +0.001169] FS-Cache: N-key=[8] '2d655c0100000000'
	[  +0.003164] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=000001d8 [p=000001d5 fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=000000002b6df4c5
	[  +0.001196] FS-Cache: O-key=[8] '2d655c0100000000'
	[  +0.000750] FS-Cache: N-cookie c=000001df [p=000001d5 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=000000009d169f1a
	[  +0.001135] FS-Cache: N-key=[8] '2d655c0100000000'
	[  +2.663059] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=000001d6 [p=000001d5 fl=226 nc=0 na=1]
	[  +0.001049] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=000000006f0b306f
	[  +0.001103] FS-Cache: O-key=[8] '2c655c0100000000'
	[  +0.000946] FS-Cache: N-cookie c=000001e1 [p=000001d5 fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=00000000296365f9
	[  +0.001104] FS-Cache: N-key=[8] '2c655c0100000000'
	[  +0.306123] FS-Cache: Duplicate cookie detected
	[  +0.000834] FS-Cache: O-cookie c=000001db [p=000001d5 fl=226 nc=0 na=1]
	[  +0.001102] FS-Cache: O-cookie d=00000000623fbe05{9p.inode} n=00000000c18cc09e
	[  +0.001220] FS-Cache: O-key=[8] '32655c0100000000'
	[  +0.000802] FS-Cache: N-cookie c=000001e2 [p=000001d5 fl=2 nc=0 na=1]
	[  +0.001011] FS-Cache: N-cookie d=00000000623fbe05{9p.inode} n=00000000e985cc75
	[  +0.001183] FS-Cache: N-key=[8] '32655c0100000000'
	
	* 
	* ==> etcd [aa5c8ccbe036d961e5bfad80e2d0b3d52684e328f9369fb8640b732a83852620] <==
	* raft2023/05/30 21:02:44 INFO: aec36adc501070cc became follower at term 0
	raft2023/05/30 21:02:44 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/05/30 21:02:44 INFO: aec36adc501070cc became follower at term 1
	raft2023/05/30 21:02:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-05-30 21:02:44.744643 W | auth: simple token is not cryptographically signed
	2023-05-30 21:02:44.748662 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-05-30 21:02:44.750199 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-05-30 21:02:44.753022 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-05-30 21:02:44.753898 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/05/30 21:02:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-05-30 21:02:44.754300 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-05-30 21:02:44.754397 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/05/30 21:02:45 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/05/30 21:02:45 INFO: aec36adc501070cc became candidate at term 2
	raft2023/05/30 21:02:45 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/05/30 21:02:45 INFO: aec36adc501070cc became leader at term 2
	raft2023/05/30 21:02:45 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-05-30 21:02:45.266710 I | etcdserver: published {Name:ingress-addon-legacy-208395 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-05-30 21:02:45.267047 I | embed: ready to serve client requests
	2023-05-30 21:02:45.268674 I | embed: serving client requests on 192.168.49.2:2379
	2023-05-30 21:02:45.269168 I | etcdserver: setting up the initial cluster version to 3.4
	2023-05-30 21:02:45.269576 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-05-30 21:02:45.269775 I | etcdserver/api: enabled capabilities for version 3.4
	2023-05-30 21:02:45.270089 I | embed: ready to serve client requests
	2023-05-30 21:02:45.278891 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  21:04:35 up 2 days, 46 min,  0 users,  load average: 1.33, 1.59, 2.18
	Linux ingress-addon-legacy-208395 5.15.0-1036-aws #40~20.04.1-Ubuntu SMP Mon Apr 24 00:20:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [5d11c88dbfbe4beba1a7c69e5ac08cd2c219bf332f87757cf7f670b991725056] <==
	* I0530 21:03:09.922997       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0530 21:03:09.923070       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0530 21:03:09.923248       1 main.go:116] setting mtu 1500 for CNI 
	I0530 21:03:09.923284       1 main.go:146] kindnetd IP family: "ipv4"
	I0530 21:03:09.923299       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0530 21:03:10.318805       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:03:10.318896       1 main.go:227] handling current node
	I0530 21:03:20.329251       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:03:20.329286       1 main.go:227] handling current node
	I0530 21:03:30.341717       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:03:30.341745       1 main.go:227] handling current node
	I0530 21:03:40.352647       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:03:40.352674       1 main.go:227] handling current node
	I0530 21:03:50.356891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:03:50.356921       1 main.go:227] handling current node
	I0530 21:04:00.369759       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:04:00.369789       1 main.go:227] handling current node
	I0530 21:04:10.373201       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:04:10.373227       1 main.go:227] handling current node
	I0530 21:04:20.382339       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:04:20.382369       1 main.go:227] handling current node
	I0530 21:04:30.390980       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0530 21:04:30.391013       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [16771db0749acb33c6b3881926f48a08cad513887e3d889a74d5e0ae2fddd62d] <==
	* I0530 21:02:49.489960       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I0530 21:02:49.607947       1 cache.go:39] Caches are synced for autoregister controller
	I0530 21:02:49.608385       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0530 21:02:49.608462       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0530 21:02:49.608512       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0530 21:02:49.612430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0530 21:02:50.387587       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0530 21:02:50.387623       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0530 21:02:50.399835       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0530 21:02:50.404002       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0530 21:02:50.404216       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0530 21:02:50.838957       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0530 21:02:50.892075       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0530 21:02:51.007907       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0530 21:02:51.009098       1 controller.go:609] quota admission added evaluator for: endpoints
	I0530 21:02:51.023130       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0530 21:02:51.816173       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0530 21:02:52.780358       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0530 21:02:52.832825       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0530 21:02:56.281784       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0530 21:03:07.443701       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0530 21:03:07.639533       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0530 21:03:28.866384       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0530 21:03:55.883009       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0530 21:04:27.204503       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [4f7da19e1cc299249ddcbad9dfca569d3cf7e31db4a5f8472c3c30c5bac5bdab] <==
	* I0530 21:03:07.564970       1 shared_informer.go:230] Caches are synced for HPA 
	I0530 21:03:07.631494       1 shared_informer.go:230] Caches are synced for deployment 
	I0530 21:03:07.662984       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0fa87c3b-1008-440c-9f59-c9010a0f5d6e", APIVersion:"apps/v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0530 21:03:07.707106       1 shared_informer.go:230] Caches are synced for disruption 
	I0530 21:03:07.707152       1 disruption.go:339] Sending events to api server.
	I0530 21:03:07.713564       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"663bebdb-8b96-4c29-9bf4-9f21d409669c", APIVersion:"apps/v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-srs89
	I0530 21:03:07.718220       1 shared_informer.go:230] Caches are synced for stateful set 
	I0530 21:03:07.765337       1 shared_informer.go:230] Caches are synced for endpoint 
	I0530 21:03:07.767173       1 shared_informer.go:230] Caches are synced for resource quota 
	I0530 21:03:07.772821       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0530 21:03:07.777123       1 shared_informer.go:230] Caches are synced for attach detach 
	I0530 21:03:07.810695       1 shared_informer.go:230] Caches are synced for resource quota 
	I0530 21:03:07.814306       1 shared_informer.go:230] Caches are synced for expand 
	I0530 21:03:07.814400       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0530 21:03:07.854925       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0530 21:03:07.854970       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0530 21:03:07.859146       1 shared_informer.go:230] Caches are synced for PV protection 
	I0530 21:03:28.847257       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"2a2e1a46-c116-4117-ac40-56ef83673597", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0530 21:03:28.881926       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"752c5dd5-03bf-4232-9152-67d3f87bfb80", APIVersion:"apps/v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-bb25b
	I0530 21:03:28.911216       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"790ceaef-b64f-4fd5-8d5a-98936eda5481", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-mpbgg
	I0530 21:03:28.964068       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2beaa5f6-658d-44e6-a3e4-bd6611bec21b", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-2zrnm
	I0530 21:03:31.583658       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"790ceaef-b64f-4fd5-8d5a-98936eda5481", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0530 21:03:31.618783       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2beaa5f6-658d-44e6-a3e4-bd6611bec21b", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0530 21:04:07.628417       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"4c65e481-17cb-4971-b8e6-1c0143c4c2b3", APIVersion:"apps/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0530 21:04:07.640891       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d60db1b4-7228-4b13-a8be-cbd69f13fe63", APIVersion:"apps/v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-rvplv
	
	* 
	* ==> kube-proxy [6586ff14be43bed133e0a98862a3fce41bd59f81d993f068745fa7e89382d835] <==
	* W0530 21:03:08.499163       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0530 21:03:08.516707       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0530 21:03:08.516762       1 server_others.go:186] Using iptables Proxier.
	I0530 21:03:08.517142       1 server.go:583] Version: v1.18.20
	I0530 21:03:08.536529       1 config.go:315] Starting service config controller
	I0530 21:03:08.536587       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0530 21:03:08.536892       1 config.go:133] Starting endpoints config controller
	I0530 21:03:08.536898       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0530 21:03:08.639922       1 shared_informer.go:230] Caches are synced for service config 
	I0530 21:03:08.640015       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [babf37be4d4724c1c12d3b26fd3ce04c4a5c69d0e97c91e1f1f129efebf65a07] <==
	* I0530 21:02:49.576916       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0530 21:02:49.579098       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0530 21:02:49.579386       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0530 21:02:49.579470       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0530 21:02:49.579585       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0530 21:02:49.583294       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0530 21:02:49.583576       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0530 21:02:49.583777       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0530 21:02:49.584040       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0530 21:02:49.587763       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0530 21:02:49.588038       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0530 21:02:49.588255       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0530 21:02:49.588480       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0530 21:02:49.588671       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0530 21:02:49.588847       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0530 21:02:49.589023       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0530 21:02:49.589198       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0530 21:02:50.537013       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0530 21:02:50.548023       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0530 21:02:50.550409       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0530 21:02:50.663086       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0530 21:02:50.679629       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0530 21:02:50.694843       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0530 21:02:50.701177       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0530 21:02:53.779687       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* May 30 21:04:11 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:11.718707    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8656a941dff17cdfe602567144d1c53169ffb20d41ceb34ec616ca0c563366e9
	May 30 21:04:11 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:11.719258    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 932197c6ff93ae71487df3ae5862e39248805ecdf2c6587debbf521560d33503
	May 30 21:04:11 ingress-addon-legacy-208395 kubelet[1634]: E0530 21:04:11.719608    1634 pod_workers.go:191] Error syncing pod 8eb245f2-2e9a-44ba-a657-cbb2b9a7a997 ("hello-world-app-5f5d8b66bb-rvplv_default(8eb245f2-2e9a-44ba-a657-cbb2b9a7a997)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-rvplv_default(8eb245f2-2e9a-44ba-a657-cbb2b9a7a997)"
	May 30 21:04:12 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:12.722524    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 932197c6ff93ae71487df3ae5862e39248805ecdf2c6587debbf521560d33503
	May 30 21:04:12 ingress-addon-legacy-208395 kubelet[1634]: E0530 21:04:12.723270    1634 pod_workers.go:191] Error syncing pod 8eb245f2-2e9a-44ba-a657-cbb2b9a7a997 ("hello-world-app-5f5d8b66bb-rvplv_default(8eb245f2-2e9a-44ba-a657-cbb2b9a7a997)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-rvplv_default(8eb245f2-2e9a-44ba-a657-cbb2b9a7a997)"
	May 30 21:04:15 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:15.463988    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d90fd59778b0055c1e3681dfd1994b90a66c92c4737cefef86d46144b4a75715
	May 30 21:04:15 ingress-addon-legacy-208395 kubelet[1634]: E0530 21:04:15.464336    1634 pod_workers.go:191] Error syncing pod 1cfd1c7e-4761-4dd0-96b6-ab463314119d ("kube-ingress-dns-minikube_kube-system(1cfd1c7e-4761-4dd0-96b6-ab463314119d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(1cfd1c7e-4761-4dd0-96b6-ab463314119d)"
	May 30 21:04:23 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:23.391976    1634 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-xnzm8" (UniqueName: "kubernetes.io/secret/1cfd1c7e-4761-4dd0-96b6-ab463314119d-minikube-ingress-dns-token-xnzm8") pod "1cfd1c7e-4761-4dd0-96b6-ab463314119d" (UID: "1cfd1c7e-4761-4dd0-96b6-ab463314119d")
	May 30 21:04:23 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:23.396392    1634 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd1c7e-4761-4dd0-96b6-ab463314119d-minikube-ingress-dns-token-xnzm8" (OuterVolumeSpecName: "minikube-ingress-dns-token-xnzm8") pod "1cfd1c7e-4761-4dd0-96b6-ab463314119d" (UID: "1cfd1c7e-4761-4dd0-96b6-ab463314119d"). InnerVolumeSpecName "minikube-ingress-dns-token-xnzm8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 30 21:04:23 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:23.492331    1634 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-xnzm8" (UniqueName: "kubernetes.io/secret/1cfd1c7e-4761-4dd0-96b6-ab463314119d-minikube-ingress-dns-token-xnzm8") on node "ingress-addon-legacy-208395" DevicePath ""
	May 30 21:04:24 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:24.745482    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d90fd59778b0055c1e3681dfd1994b90a66c92c4737cefef86d46144b4a75715
	May 30 21:04:25 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:25.463981    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 932197c6ff93ae71487df3ae5862e39248805ecdf2c6587debbf521560d33503
	May 30 21:04:25 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:25.750723    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 932197c6ff93ae71487df3ae5862e39248805ecdf2c6587debbf521560d33503
	May 30 21:04:25 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:25.751075    1634 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4208213ad6fda676ccf85dc5ef0c7ec971c633573edc1ba6c810a83680814f8
	May 30 21:04:25 ingress-addon-legacy-208395 kubelet[1634]: E0530 21:04:25.751306    1634 pod_workers.go:191] Error syncing pod 8eb245f2-2e9a-44ba-a657-cbb2b9a7a997 ("hello-world-app-5f5d8b66bb-rvplv_default(8eb245f2-2e9a-44ba-a657-cbb2b9a7a997)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-rvplv_default(8eb245f2-2e9a-44ba-a657-cbb2b9a7a997)"
	May 30 21:04:27 ingress-addon-legacy-208395 kubelet[1634]: E0530 21:04:27.196299    1634 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-bb25b.176407bc9b735937", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-bb25b", UID:"70ee5bf0-22d0-4b72-a69a-42d754d11843", APIVersion:"v1", ResourceVersion:"470", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-208395"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc115b716cb208b37, ext:94469845552, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc115b716cb208b37, ext:94469845552, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-bb25b.176407bc9b735937" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 30 21:04:27 ingress-addon-legacy-208395 kubelet[1634]: E0530 21:04:27.209677    1634 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-bb25b.176407bc9b735937", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-bb25b", UID:"70ee5bf0-22d0-4b72-a69a-42d754d11843", APIVersion:"v1", ResourceVersion:"470", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-208395"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc115b716cb208b37, ext:94469845552, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc115b716cbf5d8e9, ext:94483824602, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-bb25b.176407bc9b735937" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 30 21:04:29 ingress-addon-legacy-208395 kubelet[1634]: W0530 21:04:29.763334    1634 pod_container_deletor.go:77] Container "211835cc7753c72532bf0d79c3ac6d6453e93e60f37dfe640495718043f186cb" not found in pod's containers
	May 30 21:04:31 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:31.316209    1634 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70ee5bf0-22d0-4b72-a69a-42d754d11843-webhook-cert") pod "70ee5bf0-22d0-4b72-a69a-42d754d11843" (UID: "70ee5bf0-22d0-4b72-a69a-42d754d11843")
	May 30 21:04:31 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:31.316270    1634 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-p8h92" (UniqueName: "kubernetes.io/secret/70ee5bf0-22d0-4b72-a69a-42d754d11843-ingress-nginx-token-p8h92") pod "70ee5bf0-22d0-4b72-a69a-42d754d11843" (UID: "70ee5bf0-22d0-4b72-a69a-42d754d11843")
	May 30 21:04:31 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:31.323013    1634 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ee5bf0-22d0-4b72-a69a-42d754d11843-ingress-nginx-token-p8h92" (OuterVolumeSpecName: "ingress-nginx-token-p8h92") pod "70ee5bf0-22d0-4b72-a69a-42d754d11843" (UID: "70ee5bf0-22d0-4b72-a69a-42d754d11843"). InnerVolumeSpecName "ingress-nginx-token-p8h92". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 30 21:04:31 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:31.323740    1634 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ee5bf0-22d0-4b72-a69a-42d754d11843-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "70ee5bf0-22d0-4b72-a69a-42d754d11843" (UID: "70ee5bf0-22d0-4b72-a69a-42d754d11843"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 30 21:04:31 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:31.416592    1634 reconciler.go:319] Volume detached for volume "ingress-nginx-token-p8h92" (UniqueName: "kubernetes.io/secret/70ee5bf0-22d0-4b72-a69a-42d754d11843-ingress-nginx-token-p8h92") on node "ingress-addon-legacy-208395" DevicePath ""
	May 30 21:04:31 ingress-addon-legacy-208395 kubelet[1634]: I0530 21:04:31.416653    1634 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/70ee5bf0-22d0-4b72-a69a-42d754d11843-webhook-cert") on node "ingress-addon-legacy-208395" DevicePath ""
	May 30 21:04:32 ingress-addon-legacy-208395 kubelet[1634]: W0530 21:04:32.470655    1634 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/70ee5bf0-22d0-4b72-a69a-42d754d11843/volumes" does not exist
	
	* 
	* ==> storage-provisioner [812b33a4e140a21d40b8c2ce80a0b6f108d4a078224d4d45b17c47828735c225] <==
	* I0530 21:03:11.203738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0530 21:03:11.216939       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0530 21:03:11.217261       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0530 21:03:11.226209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0530 21:03:11.226476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-208395_5a003f27-666d-4798-b459-37647ee62054!
	I0530 21:03:11.226591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f198c8ab-46c2-44bf-a550-68d9ab749a14", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-208395_5a003f27-666d-4798-b459-37647ee62054 became leader
	I0530 21:03:11.327370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-208395_5a003f27-666d-4798-b459-37647ee62054!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-208395 -n ingress-addon-legacy-208395
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-208395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (57.79s)
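
Note: the kubelet log above shows both hello-world-app and kube-ingress-dns-minikube cycling through CrashLoopBackOff. A minimal triage sketch for a failure like this (pod and context names taken from the log; it assumes the cluster is still up) is to describe the pod and pull the previous container's output:

	kubectl --context ingress-addon-legacy-208395 -n default describe pod hello-world-app-5f5d8b66bb-rvplv
	# --previous returns the logs of the last terminated run of the container
	kubectl --context ingress-addon-legacy-208395 -n default logs hello-world-app-5f5d8b66bb-rvplv --previous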

                                                
                                    
x
+
TestMissingContainerUpgrade (87.1s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.4112258489.exe start -p missing-upgrade-929537 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.4112258489.exe start -p missing-upgrade-929537 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (1m6.116775096s)

                                                
                                                
-- stdout --
	! [missing-upgrade-929537] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-929537
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...
	* Deleting "missing-upgrade-929537" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-929537" running: temporary error created container "missing-upgrade-929537" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-929537" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-929537" running: temporary error created container "missing-upgrade-929537" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
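
Note: the first attempt fails because the kic container never reaches the running state. A hedged diagnostic sketch, assuming the container has not yet been cleaned up, is to read its final state and boot output directly from Docker:

	# final status and exit code of the kic container
	docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}}' missing-upgrade-929537
	# last lines of the container's /sbin/init output
	docker logs --tail 50 missing-upgrade-929537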
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.4112258489.exe start -p missing-upgrade-929537 --memory=2200 --driver=docker  --container-runtime=containerd
E0530 21:25:44.070730 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.4112258489.exe start -p missing-upgrade-929537 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.83794007s)

                                                
                                                
-- stdout --
	* [missing-upgrade-929537] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-929537
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-929537" ...
	* Restarting existing docker container for "missing-upgrade-929537" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-929537", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-929537" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-929537", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
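
Note: the retries fail in provisioning because minikube v1.9.1 renders the Go template (index (index .NetworkSettings.Ports "22/tcp") 0) against a container whose port map is empty, so the outer index is applied to an untyped nil. A nil-safe variant of the same lookup, shown only as a sketch of the failure mode (not a fix shipped by minikube), guards it with with/else:

	docker container inspect -f '{{with (index .NetworkSettings.Ports "22/tcp")}}{{(index . 0).HostPort}}{{else}}no 22/tcp binding{{end}}' missing-upgrade-929537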
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.4112258489.exe start -p missing-upgrade-929537 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.4112258489.exe start -p missing-upgrade-929537 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.362288254s)

                                                
                                                
-- stdout --
	* [missing-upgrade-929537] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-929537
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-929537" ...
	* Restarting existing docker container for "missing-upgrade-929537" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-929537", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-929537" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-929537", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-05-30 21:25:55.662261095 +0000 UTC m=+2106.751816148
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-929537
helpers_test.go:235: (dbg) docker inspect missing-upgrade-929537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf219554c34291af5225e0acdcd38588cf2a767ddc5292b48ffcc1a58d66ab1d",
	        "Created": "2023-05-30T21:25:22.499154121Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2023-05-30T21:25:55.447086516Z",
	            "FinishedAt": "2023-05-30T21:25:55.446296238Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/bf219554c34291af5225e0acdcd38588cf2a767ddc5292b48ffcc1a58d66ab1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf219554c34291af5225e0acdcd38588cf2a767ddc5292b48ffcc1a58d66ab1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf219554c34291af5225e0acdcd38588cf2a767ddc5292b48ffcc1a58d66ab1d/hosts",
	        "LogPath": "/var/lib/docker/containers/bf219554c34291af5225e0acdcd38588cf2a767ddc5292b48ffcc1a58d66ab1d/bf219554c34291af5225e0acdcd38588cf2a767ddc5292b48ffcc1a58d66ab1d-json.log",
	        "Name": "/missing-upgrade-929537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-929537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b39fe13cd9eb144c56615bf077cc2b69746f2020726ce2a55076d6785ea3fe4f-init/diff:/var/lib/docker/overlay2/1aa49736069f95869ff0123b4b8335696d3f6cad0c74c7a77215a9e6e4bacefa/diff:/var/lib/docker/overlay2/99fe4a00b0b12f281d891bec5c531fb9ba60d0776935cc678f18d52c8ca33971/diff:/var/lib/docker/overlay2/01b021405f84016b9f55302d5cfa3a62968049744a93482f91e31b9c2bbdb7c3/diff:/var/lib/docker/overlay2/230742d18a91a8bde170242bf3f7b84436f59635b6e7b041b8491d6357addff7/diff:/var/lib/docker/overlay2/b503f1707e0078838bffaa4f04a36b7260fef240b54f91da64565ca1bc6eca85/diff:/var/lib/docker/overlay2/f7d7d824494ff06ed5cd1ccbb0f70612ff36dee946cef16696749c44f165c793/diff:/var/lib/docker/overlay2/590efd26c04e943cc72381d9eab5383ec54d78eef19a5ca3d3b381e8e8843aa3/diff:/var/lib/docker/overlay2/16539717386b8df024753f1b7ae3334af0cd540ce401993cc13420782bbd5592/diff:/var/lib/docker/overlay2/612bab31e9652b05f220a97133e5b18f554ded32b3fd48034f4de016a392fd9d/diff:/var/lib/docker/overlay2/512304
2731b0668656423016122df0c5572e7c878aa0be1925a0e8698ec8415a/diff:/var/lib/docker/overlay2/053226ce84a39c60e2ea2c4b50d725d791f0bb88834f01e45dbc2fc13bc0cc52/diff:/var/lib/docker/overlay2/9a26c9a4ead0110c7ad2f5f75cc76849659cbb3058a30836446ebbcccfe22980/diff:/var/lib/docker/overlay2/92aa0641b065561949769af8b2f61dcc22f4813b700fd305435369a7616fb56b/diff:/var/lib/docker/overlay2/f94640e17b66d07f8110d119f7da91705d9eb47823c9ebe7778ff45d2296d192/diff:/var/lib/docker/overlay2/621e533a802f034e5353143cb4f9bc3ab03b4e9976383fee3b2e0ea91cda96ce/diff:/var/lib/docker/overlay2/9db879fc8928ee8702a9f1d3f8ad6b8539584947e1130798f45e554ddeb40113/diff:/var/lib/docker/overlay2/eee4f9cb8defe2d00078a49310178a82d358f282fdd3d973e035dee16eac627d/diff:/var/lib/docker/overlay2/5114eb5b270f106992cb20f4f4bd55825fb90f210f996ae549dc321167246b85/diff:/var/lib/docker/overlay2/39ac73bdda698a3eb90c9b8022fd13d9514c6f8fa76bea976e4692ed51966178/diff:/var/lib/docker/overlay2/5c32e0a65c3e6f4f8263e2abfed0d1148a99a4321e43d03e2eb69d950d35731f/diff:/var/lib/d
ocker/overlay2/fe842cfff249a0a71e3b7a8e8061a6d26d520adccb999989e6c445f435073486/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b39fe13cd9eb144c56615bf077cc2b69746f2020726ce2a55076d6785ea3fe4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b39fe13cd9eb144c56615bf077cc2b69746f2020726ce2a55076d6785ea3fe4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b39fe13cd9eb144c56615bf077cc2b69746f2020726ce2a55076d6785ea3fe4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-929537",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-929537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-929537",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-929537",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-929537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7e70bcb9f44a0b1638f5b10fe5ca4cfd2ddb6a651e697d3f7c8d47cc8064d3a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/d7e70bcb9f44",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "ec1d59a6b7e6a8f7956734dd99018e1ed01d82f7e139539ca54a48d3bccf6812",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
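
Note: the inspect output bears the template error out: the container exited with code 1 and its "Ports" map is empty ({}), which is exactly the untyped nil the SSH-port template tripped over. The field can be checked in isolation with a diagnostic one-liner:

	docker inspect missing-upgrade-929537 --format '{{json .NetworkSettings.Ports}}'   # prints {} here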
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-929537 -n missing-upgrade-929537
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-929537 -n missing-upgrade-929537: exit status 7 (78.467284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-929537" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-929537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-929537
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-929537: (4.092137552s)
--- FAIL: TestMissingContainerUpgrade (87.10s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (886.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.2328044688.exe start -p stopped-upgrade-708012 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.2328044688.exe start -p stopped-upgrade-708012 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m39.622792682s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.2328044688.exe -p stopped-upgrade-708012 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.2328044688.exe -p stopped-upgrade-708012 stop: (12.520760262s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-708012 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0530 21:28:34.569220 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:28:38.505185 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:30:44.070649 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-708012 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m53.632407147s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-708012] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-708012 in cluster stopped-upgrade-708012
	* Pulling base image ...
	* Downloading Kubernetes v1.21.2 preload ...
	* Restarting existing docker container for "stopped-upgrade-708012" ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
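
Note: "Generating certificates and keys" and "Booting up control plane" each appear twice in the stdout above, which is consistent with minikube retrying kubeadm after the first control-plane bring-up fails; the run then gives up with exit status 109. When reproducing locally, the full kubeadm and container-runtime output can be captured with minikube's own log collector (flags as in minikube v1.30.x):

	out/minikube-linux-arm64 -p stopped-upgrade-708012 logs --file=stopped-upgrade-708012.log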
** stderr ** 
	I0530 21:27:53.441250 2415330 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:27:53.441866 2415330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:27:53.441906 2415330 out.go:309] Setting ErrFile to fd 2...
	I0530 21:27:53.441926 2415330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:27:53.442196 2415330 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:27:53.442726 2415330 out.go:303] Setting JSON to false
	I0530 21:27:53.446642 2415330 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":176973,"bootTime":1685305101,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 21:27:53.446751 2415330 start.go:135] virtualization:  
	I0530 21:27:53.449152 2415330 out.go:177] * [stopped-upgrade-708012] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 21:27:53.451266 2415330 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 21:27:53.453091 2415330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 21:27:53.451330 2415330 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0530 21:27:53.451372 2415330 notify.go:220] Checking for updates...
	I0530 21:27:53.456854 2415330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:27:53.459495 2415330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 21:27:53.461416 2415330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 21:27:53.463389 2415330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 21:27:53.465697 2415330 config.go:182] Loaded profile config "stopped-upgrade-708012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0530 21:27:53.469212 2415330 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0530 21:27:53.470930 2415330 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 21:27:53.498585 2415330 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 21:27:53.498682 2415330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:27:53.581781 2415330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-30 21:27:53.571137491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:27:53.581898 2415330 docker.go:294] overlay module found
	I0530 21:27:53.585607 2415330 out.go:177] * Using the docker driver based on existing profile
	I0530 21:27:53.588335 2415330 start.go:295] selected driver: docker
	I0530 21:27:53.588357 2415330 start.go:870] validating driver "docker" against &{Name:stopped-upgrade-708012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-708012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:27:53.588467 2415330 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 21:27:53.589917 2415330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:27:53.653534 2415330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:45 SystemTime:2023-05-30 21:27:53.64373844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:27:53.653791 2415330 cni.go:84] Creating CNI manager for ""
	I0530 21:27:53.653809 2415330 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 21:27:53.653820 2415330 start_flags.go:319] config:
	{Name:stopped-upgrade-708012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-708012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:27:53.656452 2415330 out.go:177] * Starting control plane node stopped-upgrade-708012 in cluster stopped-upgrade-708012
	I0530 21:27:53.658790 2415330 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0530 21:27:53.661243 2415330 out.go:177] * Pulling base image ...
	I0530 21:27:53.663170 2415330 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0530 21:27:53.663346 2415330 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0530 21:27:53.683529 2415330 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0530 21:27:53.683553 2415330 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0530 21:27:53.734665 2415330 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0530 21:27:53.734688 2415330 cache.go:57] Caching tarball of preloaded images
	I0530 21:27:53.734866 2415330 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0530 21:27:53.736998 2415330 out.go:177] * Downloading Kubernetes v1.21.2 preload ...
	I0530 21:27:53.739731 2415330 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0530 21:27:53.870480 2415330 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:f1e1f7bdb5d08690c839f70306158850 -> /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0530 21:28:02.702282 2415330 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0530 21:28:02.702389 2415330 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0530 21:28:04.295030 2415330 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
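The download above carries a `?checksum=md5:...` query, and preload.go then saves and verifies that digest before trusting the tarball. A minimal sketch of the verification half, assuming the expected hex digest is already known (the function name is ours; path and digest are taken from the log lines above):

-- sketch (Go) --
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes the file's MD5 and compares it with the expected
// hex digest, like the checksum step that follows the preload download.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5(
		"/home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4",
		"f1e1f7bdb5d08690c839f70306158850")
	fmt.Println(err)
}
-- /sketch --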
	I0530 21:28:04.295177 2415330 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/config.json ...
	I0530 21:28:04.295407 2415330 cache.go:195] Successfully downloaded all kic artifacts
	I0530 21:28:04.295452 2415330 start.go:364] acquiring machines lock for stopped-upgrade-708012: {Name:mkcdf67b4e86c42808cdf10ead47aa2b26031f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 21:28:04.295522 2415330 start.go:368] acquired machines lock for "stopped-upgrade-708012" in 45.431µs
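The two start.go lines above show the machines lock being taken with a 500ms retry delay and a 10-minute timeout. As an illustrative sketch only (minikube's real lock is a named cross-process mutex, not this file-based stand-in), the same delay/timeout pattern looks like:

-- sketch (Go) --
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire spins on an O_EXCL lock file until it wins or the timeout
// elapses, mirroring the Delay/Timeout knobs in the log line above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired machines lock")
}
-- /sketch --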
	I0530 21:28:04.296373 2415330 start.go:96] Skipping create...Using existing machine configuration
	I0530 21:28:04.296403 2415330 fix.go:55] fixHost starting: 
	I0530 21:28:04.296699 2415330 cli_runner.go:164] Run: docker container inspect stopped-upgrade-708012 --format={{.State.Status}}
	I0530 21:28:04.323954 2415330 fix.go:103] recreateIfNeeded on stopped-upgrade-708012: state=Stopped err=<nil>
	W0530 21:28:04.323980 2415330 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 21:28:04.328302 2415330 out.go:177] * Restarting existing docker container for "stopped-upgrade-708012" ...
	I0530 21:28:04.329958 2415330 cli_runner.go:164] Run: docker start stopped-upgrade-708012
	I0530 21:28:04.699108 2415330 cli_runner.go:164] Run: docker container inspect stopped-upgrade-708012 --format={{.State.Status}}
	I0530 21:28:04.721906 2415330 kic.go:426] container "stopped-upgrade-708012" state is running.
	I0530 21:28:04.722282 2415330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-708012
	I0530 21:28:04.744883 2415330 profile.go:148] Saving config to /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/config.json ...
	I0530 21:28:04.745132 2415330 machine.go:88] provisioning docker machine ...
	I0530 21:28:04.746434 2415330 ubuntu.go:169] provisioning hostname "stopped-upgrade-708012"
	I0530 21:28:04.746537 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:04.769968 2415330 main.go:141] libmachine: Using SSH client type: native
	I0530 21:28:04.770425 2415330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 41144 <nil> <nil>}
	I0530 21:28:04.770444 2415330 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-708012 && echo "stopped-upgrade-708012" | sudo tee /etc/hostname
	I0530 21:28:04.771160 2415330 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60676->127.0.0.1:41144: read: connection reset by peer
	I0530 21:28:07.912748 2415330 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-708012
	
	I0530 21:28:07.913540 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:07.936507 2415330 main.go:141] libmachine: Using SSH client type: native
	I0530 21:28:07.936932 2415330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 41144 <nil> <nil>}
	I0530 21:28:07.936960 2415330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-708012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-708012/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-708012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0530 21:28:08.062775 2415330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
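Every provisioning command above runs through an SSH session against the container's forwarded port 22 (127.0.0.1:41144 here), which is why the first dial can fail with a connection reset while sshd is still coming up. A minimal sketch of one such remote command using golang.org/x/crypto/ssh (the key path and helper name are ours):

-- sketch (Go) --
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens one SSH session on the forwarded port and runs cmd,
// returning combined output, the way the hostname commands above are run.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:41144", "docker", "id_rsa", "hostname")
	fmt.Println(out, err)
}
-- /sketch --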
	I0530 21:28:08.062802 2415330 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16597-2288886/.minikube CaCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16597-2288886/.minikube}
	I0530 21:28:08.062845 2415330 ubuntu.go:177] setting up certificates
	I0530 21:28:08.062854 2415330 provision.go:83] configureAuth start
	I0530 21:28:08.062917 2415330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-708012
	I0530 21:28:08.081631 2415330 provision.go:138] copyHostCerts
	I0530 21:28:08.082531 2415330 exec_runner.go:144] found /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem, removing ...
	I0530 21:28:08.083122 2415330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem
	I0530 21:28:08.083214 2415330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/cert.pem (1123 bytes)
	I0530 21:28:08.084883 2415330 exec_runner.go:144] found /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem, removing ...
	I0530 21:28:08.084900 2415330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem
	I0530 21:28:08.084942 2415330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/key.pem (1679 bytes)
	I0530 21:28:08.085014 2415330 exec_runner.go:144] found /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem, removing ...
	I0530 21:28:08.085023 2415330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem
	I0530 21:28:08.085047 2415330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.pem (1078 bytes)
	I0530 21:28:08.085102 2415330 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-708012 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-708012]
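The server cert above is minted locally and signed by the minikube CA, with the node IP, localhost, and machine names from the `san=[...]` list baked in as SANs. A compact sketch of that signing step with crypto/x509; caCert and caKey are assumed to be already loaded from ca.pem/ca-key.pem:

-- sketch (Go) --
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate carrying the given SANs,
// signed by the CA, like the san=[...] list in the log line above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-708012"}},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
-- /sketch --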
	I0530 21:28:08.479554 2415330 provision.go:172] copyRemoteCerts
	I0530 21:28:08.479622 2415330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0530 21:28:08.479670 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:08.499743 2415330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41144 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/stopped-upgrade-708012/id_rsa Username:docker}
	I0530 21:28:08.592894 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0530 21:28:08.619862 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0530 21:28:08.645288 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0530 21:28:08.671106 2415330 provision.go:86] duration metric: configureAuth took 607.800342ms
	I0530 21:28:08.671139 2415330 ubuntu.go:193] setting minikube options for container-runtime
	I0530 21:28:08.671334 2415330 config.go:182] Loaded profile config "stopped-upgrade-708012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0530 21:28:08.671345 2415330 machine.go:91] provisioned docker machine in 3.926197961s
	I0530 21:28:08.671353 2415330 start.go:300] post-start starting for "stopped-upgrade-708012" (driver="docker")
	I0530 21:28:08.671364 2415330 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0530 21:28:08.671420 2415330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0530 21:28:08.671481 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:08.689944 2415330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41144 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/stopped-upgrade-708012/id_rsa Username:docker}
	I0530 21:28:08.779229 2415330 ssh_runner.go:195] Run: cat /etc/os-release
	I0530 21:28:08.783358 2415330 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0530 21:28:08.783386 2415330 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0530 21:28:08.783398 2415330 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0530 21:28:08.783404 2415330 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0530 21:28:08.783413 2415330 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/addons for local assets ...
	I0530 21:28:08.783476 2415330 filesync.go:126] Scanning /home/jenkins/minikube-integration/16597-2288886/.minikube/files for local assets ...
	I0530 21:28:08.783559 2415330 filesync.go:149] local asset: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem -> 22942922.pem in /etc/ssl/certs
	I0530 21:28:08.783666 2415330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0530 21:28:08.793459 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem --> /etc/ssl/certs/22942922.pem (1708 bytes)
	I0530 21:28:08.819422 2415330 start.go:303] post-start completed in 148.04884ms
	I0530 21:28:08.819543 2415330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0530 21:28:08.819627 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:08.839060 2415330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41144 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/stopped-upgrade-708012/id_rsa Username:docker}
	I0530 21:28:08.927712 2415330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0530 21:28:08.933530 2415330 fix.go:57] fixHost completed within 4.637119011s
	I0530 21:28:08.933552 2415330 start.go:83] releasing machines lock for "stopped-upgrade-708012", held for 4.638017546s
	I0530 21:28:08.933624 2415330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-708012
	I0530 21:28:08.955197 2415330 ssh_runner.go:195] Run: cat /version.json
	I0530 21:28:08.955222 2415330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0530 21:28:08.955252 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:08.956154 2415330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-708012
	I0530 21:28:08.986912 2415330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41144 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/stopped-upgrade-708012/id_rsa Username:docker}
	I0530 21:28:09.003177 2415330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41144 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/stopped-upgrade-708012/id_rsa Username:docker}
	W0530 21:28:09.099065 2415330 start.go:409] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0530 21:28:09.099217 2415330 ssh_runner.go:195] Run: systemctl --version
	I0530 21:28:09.241132 2415330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0530 21:28:09.249876 2415330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0530 21:28:09.291541 2415330 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0530 21:28:09.291631 2415330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0530 21:28:09.333897 2415330 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0530 21:28:09.333923 2415330 start.go:481] detecting cgroup driver to use...
	I0530 21:28:09.333955 2415330 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0530 21:28:09.334015 2415330 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0530 21:28:09.354818 2415330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0530 21:28:09.369719 2415330 docker.go:193] disabling cri-docker service (if available) ...
	I0530 21:28:09.369796 2415330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0530 21:28:09.384444 2415330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0530 21:28:09.400770 2415330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0530 21:28:09.417256 2415330 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0530 21:28:09.417403 2415330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0530 21:28:09.550071 2415330 docker.go:209] disabling docker service ...
	I0530 21:28:09.550134 2415330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0530 21:28:09.566921 2415330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0530 21:28:09.581859 2415330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0530 21:28:09.691040 2415330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0530 21:28:09.808725 2415330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0530 21:28:09.822237 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0530 21:28:09.842747 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0530 21:28:09.854833 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0530 21:28:09.867203 2415330 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0530 21:28:09.867289 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0530 21:28:09.878988 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 21:28:09.892112 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0530 21:28:09.903715 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0530 21:28:09.915337 2415330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0530 21:28:09.927270 2415330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0530 21:28:09.939870 2415330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0530 21:28:09.949960 2415330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0530 21:28:09.960126 2415330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0530 21:28:10.066086 2415330 ssh_runner.go:195] Run: sudo systemctl restart containerd
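Each sed above rewrites a single key in /etc/containerd/config.toml before the daemon restart; the SystemdCgroup edit, for instance, pins the runtime to the detected cgroupfs driver while preserving indentation. The same edit done in Go instead of sed, as a sketch (the helper name is ours):

-- sketch (Go) --
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs rewrites `SystemdCgroup = ...` to false in config.toml,
// keeping leading whitespace, like the sed command in the log above.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	fmt.Println(setCgroupfs("/etc/containerd/config.toml"))
}
-- /sketch --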
	I0530 21:28:10.247066 2415330 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0530 21:28:10.247145 2415330 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0530 21:28:10.253496 2415330 start.go:549] Will wait 60s for crictl version
	I0530 21:28:10.253575 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:10.258568 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0530 21:28:10.294861 2415330 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0530 21:28:10.294947 2415330 ssh_runner.go:195] Run: containerd --version
	I0530 21:28:10.334759 2415330 ssh_runner.go:195] Run: containerd --version
	I0530 21:28:10.368404 2415330 out.go:177] * Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
	I0530 21:28:10.370871 2415330 cli_runner.go:164] Run: docker network inspect stopped-upgrade-708012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0530 21:28:10.388471 2415330 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0530 21:28:10.394256 2415330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0530 21:28:10.412305 2415330 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0530 21:28:10.414332 2415330 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0530 21:28:10.414416 2415330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 21:28:10.449065 2415330 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
	I0530 21:28:10.449978 2415330 ssh_runner.go:195] Run: which lz4
	I0530 21:28:10.454596 2415330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0530 21:28:10.459181 2415330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: source file and destination file are different sizes
	I0530 21:28:10.459225 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (466850159 bytes)
	I0530 21:28:13.147196 2415330 containerd.go:547] Took 2.692645 seconds to copy over tarball
	I0530 21:28:13.147317 2415330 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0530 21:28:16.439391 2415330 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.292015766s)
	I0530 21:28:16.439415 2415330 containerd.go:554] Took 3.292157 seconds to extract the tarball
	I0530 21:28:16.439424 2415330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0530 21:28:16.508319 2415330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0530 21:28:16.650833 2415330 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0530 21:28:16.829058 2415330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0530 21:28:16.927172 2415330 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.21.2 registry.k8s.io/kube-controller-manager:v1.21.2 registry.k8s.io/kube-scheduler:v1.21.2 registry.k8s.io/kube-proxy:v1.21.2 registry.k8s.io/pause:3.4.1 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0530 21:28:16.927932 2415330 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:28:16.928055 2415330 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.21.2
	I0530 21:28:16.931868 2415330 image.go:134] retrieving image: registry.k8s.io/pause:3.4.1
	I0530 21:28:16.932046 2415330 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0530 21:28:16.932188 2415330 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.0
	I0530 21:28:16.932387 2415330 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0530 21:28:16.932482 2415330 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.21.2
	I0530 21:28:16.932623 2415330 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.21.2
	I0530 21:28:16.936232 2415330 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.21.2
	I0530 21:28:16.936359 2415330 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.21.2
	I0530 21:28:16.937446 2415330 image.go:177] daemon lookup for registry.k8s.io/pause:3.4.1: Error response from daemon: No such image: registry.k8s.io/pause:3.4.1
	I0530 21:28:16.937691 2415330 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0530 21:28:16.938028 2415330 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0530 21:28:16.938094 2415330 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.0: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.0
	I0530 21:28:16.938146 2415330 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:28:16.938204 2415330 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.21.2
	I0530 21:28:17.415290 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.21.2"
	W0530 21:28:17.416294 2415330 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0530 21:28:17.416613 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.13-0"
	I0530 21:28:17.420770 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.21.2"
	W0530 21:28:17.426296 2415330 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.0 arch mismatch: want arm64 got amd64. fixing
	I0530 21:28:17.426485 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.0"
	I0530 21:28:17.429183 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.4.1"
	I0530 21:28:17.439910 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.21.2"
	I0530 21:28:17.480785 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.21.2"
	W0530 21:28:17.611500 2415330 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0530 21:28:17.611620 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0530 21:28:18.205580 2415330 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.21.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.21.2" does not exist at hash "ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4" in container runtime
	I0530 21:28:18.205672 2415330 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.21.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.21.2" does not exist at hash "9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630" in container runtime
	I0530 21:28:18.206635 2415330 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.21.2
	I0530 21:28:18.206709 2415330 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.0" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.0" does not exist at hash "a9174f4673e21040a8819633bac9ddf5cbf9df5d0fcabcc2faef5e85862bff17" in container runtime
	I0530 21:28:18.206734 2415330 cri.go:217] Removing image: registry.k8s.io/coredns/coredns:v1.8.0
	I0530 21:28:18.206784 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.206637 2415330 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.21.2
	I0530 21:28:18.206857 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.206787 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.206948 2415330 cache_images.go:116] "registry.k8s.io/pause:3.4.1" needs transfer: "registry.k8s.io/pause:3.4.1" does not exist at hash "d055819ed991a06271de68c9bc251fdc3d007c30e8166f814d0cdbd656c0d259" in container runtime
	I0530 21:28:18.205640 2415330 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "e5b5933abcfdc943a0c8ab9a7837e9a3dc3a24619ef57dca3cc4b4ea689ac2c1" in container runtime
	I0530 21:28:18.206982 2415330 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0530 21:28:18.207016 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.207054 2415330 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.21.2" needs transfer: "registry.k8s.io/kube-proxy:v1.21.2" does not exist at hash "d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105" in container runtime
	I0530 21:28:18.207086 2415330 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.21.2
	I0530 21:28:18.207135 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.207068 2415330 cri.go:217] Removing image: registry.k8s.io/pause:3.4.1
	I0530 21:28:18.207264 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.264784 2415330 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.21.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.21.2" does not exist at hash "2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0" in container runtime
	I0530 21:28:18.264840 2415330 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.21.2
	I0530 21:28:18.264887 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.273076 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0530 21:28:18.273149 2415330 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0530 21:28:18.273184 2415330 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:28:18.273210 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:28:18.273259 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.21.2
	I0530 21:28:18.273334 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.21.2
	I0530 21:28:18.273411 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.4.1
	I0530 21:28:18.273469 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.0
	I0530 21:28:18.273528 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.21.2
	I0530 21:28:18.273587 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.21.2
	I0530 21:28:18.407684 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.21.2
	I0530 21:28:18.407839 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.21.2
	I0530 21:28:18.407888 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/pause_3.4.1
	I0530 21:28:18.407929 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.0
	I0530 21:28:18.407983 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.21.2
	I0530 21:28:18.408036 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.21.2
	I0530 21:28:18.408073 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0530 21:28:18.408137 2415330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 21:28:18.537389 2415330 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0530 21:28:18.537538 2415330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0530 21:28:18.542810 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0530 21:28:18.607465 2415330 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0530 21:28:18.607553 2415330 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0530 21:28:18.956738 2415330 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0530 21:28:18.956784 2415330 cache_images.go:92] LoadImages completed in 2.029592254s
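Images that survive the transfer (here only storage-provisioner, since the other cached files are missing) are copied to /var/lib/minikube/images and imported into containerd's k8s.io namespace. A sketch of that import call, assuming it runs on the node where `ctr` is installed:

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// importImage loads a saved image tarball into containerd's k8s.io
// namespace, the same `ctr ... images import` call shown in the log.
func importImage(tarball string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(importImage("/var/lib/minikube/images/storage-provisioner_v5"))
}
-- /sketch --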
	W0530 21:28:18.956919 2415330 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.21.2: no such file or directory
	I0530 21:28:18.956992 2415330 ssh_runner.go:195] Run: sudo crictl info
	I0530 21:28:18.991806 2415330 cni.go:84] Creating CNI manager for ""
	I0530 21:28:18.991825 2415330 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 21:28:18.991838 2415330 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0530 21:28:18.991856 2415330 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-708012 NodeName:stopped-upgrade-708012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0530 21:28:18.991978 2415330 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "stopped-upgrade-708012"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
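The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct at kubeadm.go:176 and written out as /var/tmp/minikube/kubeadm.yaml.new. A toy render of the first few fields with text/template; this trimmed template is a stand-in, not minikube's real one:

-- sketch (Go) --
package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for the kubeadm config template; the real one
// renders the full set of documents shown above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.76.2",
		"APIServerPort":    8443,
		"CRISocket":        "/run/containerd/containerd.sock",
		"NodeName":         "stopped-upgrade-708012",
	})
}
-- /sketch --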
	
	I0530 21:28:18.992058 2415330 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=stopped-upgrade-708012 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-708012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0530 21:28:18.992139 2415330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0530 21:28:19.002593 2415330 binaries.go:44] Found k8s binaries, skipping transfer
	I0530 21:28:19.002708 2415330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0530 21:28:19.011982 2415330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (473 bytes)
	I0530 21:28:19.029181 2415330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0530 21:28:19.046556 2415330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0530 21:28:19.063929 2415330 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0530 21:28:19.068400 2415330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0530 21:28:19.080879 2415330 certs.go:56] Setting up /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012 for IP: 192.168.76.2
	I0530 21:28:19.080906 2415330 certs.go:190] acquiring lock for shared ca certs: {Name:mkef74d64a59002b998e67685a207d5c04604358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:28:19.081066 2415330 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key
	I0530 21:28:19.081107 2415330 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key
	I0530 21:28:19.081179 2415330 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.key
	I0530 21:28:19.081247 2415330 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/apiserver.key.31bdca25
	I0530 21:28:19.081288 2415330 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/proxy-client.key
	I0530 21:28:19.081498 2415330 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292.pem (1338 bytes)
	W0530 21:28:19.081528 2415330 certs.go:433] ignoring /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292_empty.pem, impossibly tiny 0 bytes
	I0530 21:28:19.081537 2415330 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca-key.pem (1675 bytes)
	I0530 21:28:19.081563 2415330 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/ca.pem (1078 bytes)
	I0530 21:28:19.081585 2415330 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/cert.pem (1123 bytes)
	I0530 21:28:19.081610 2415330 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/certs/key.pem (1679 bytes)
	I0530 21:28:19.081655 2415330 certs.go:437] found cert: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem (1708 bytes)
	I0530 21:28:19.082248 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0530 21:28:19.106314 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0530 21:28:19.129792 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0530 21:28:19.153241 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0530 21:28:19.178824 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0530 21:28:19.203456 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0530 21:28:19.228017 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0530 21:28:19.252863 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0530 21:28:19.277807 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0530 21:28:19.304881 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/certs/2294292.pem --> /usr/share/ca-certificates/2294292.pem (1338 bytes)
	I0530 21:28:19.330742 2415330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/ssl/certs/22942922.pem --> /usr/share/ca-certificates/22942922.pem (1708 bytes)
	I0530 21:28:19.356474 2415330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0530 21:28:19.375249 2415330 ssh_runner.go:195] Run: openssl version
	I0530 21:28:19.382444 2415330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2294292.pem && ln -fs /usr/share/ca-certificates/2294292.pem /etc/ssl/certs/2294292.pem"
	I0530 21:28:19.393407 2415330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2294292.pem
	I0530 21:28:19.398266 2415330 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 30 20:58 /usr/share/ca-certificates/2294292.pem
	I0530 21:28:19.398328 2415330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2294292.pem
	I0530 21:28:19.405766 2415330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2294292.pem /etc/ssl/certs/51391683.0"
	I0530 21:28:19.416155 2415330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22942922.pem && ln -fs /usr/share/ca-certificates/22942922.pem /etc/ssl/certs/22942922.pem"
	I0530 21:28:19.426866 2415330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22942922.pem
	I0530 21:28:19.431697 2415330 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 30 20:58 /usr/share/ca-certificates/22942922.pem
	I0530 21:28:19.431805 2415330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22942922.pem
	I0530 21:28:19.438913 2415330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22942922.pem /etc/ssl/certs/3ec20f2e.0"
	I0530 21:28:19.450928 2415330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0530 21:28:19.461627 2415330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:28:19.466289 2415330 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 30 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:28:19.466363 2415330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0530 21:28:19.473818 2415330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0530 21:28:19.483694 2415330 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0530 21:28:19.488191 2415330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0530 21:28:19.495196 2415330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0530 21:28:19.502318 2415330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0530 21:28:19.509141 2415330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0530 21:28:19.516100 2415330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0530 21:28:19.523108 2415330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
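The six openssl runs above use `-checkend 86400`, which exits non-zero if the certificate expires within the next 24 hours and would trigger regeneration. The equivalent check in Go (the helper name is ours):

-- sketch (Go) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// the equivalent of `openssl x509 -checkend 86400` in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
}
-- /sketch --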
	I0530 21:28:19.530059 2415330 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-708012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-708012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:28:19.530171 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0530 21:28:19.530228 2415330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0530 21:28:19.569648 2415330 cri.go:88] found id: ""
	I0530 21:28:19.569715 2415330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0530 21:28:19.579572 2415330 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0530 21:28:19.579592 2415330 kubeadm.go:636] restartCluster start
	I0530 21:28:19.579655 2415330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0530 21:28:19.589113 2415330 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0530 21:28:19.589753 2415330 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-708012" does not appear in /home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:28:19.589977 2415330 kubeconfig.go:146] "stopped-upgrade-708012" context is missing from /home/jenkins/minikube-integration/16597-2288886/kubeconfig - will repair!
	I0530 21:28:19.590396 2415330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16597-2288886/kubeconfig: {Name:mk0fdfd8357f1362eedcc9930d50aa3f3a348d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 21:28:19.592254 2415330 kapi.go:59] client config for stopped-upgrade-708012: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.crt", KeyFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.key", CAFile:"/home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13ddbe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0530 21:28:19.593110 2415330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0530 21:28:19.606187 2415330 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-05-30 21:27:06.284173099 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-05-30 21:28:19.059519790 +0000
	@@ -52,6 +52,8 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	@@ -68,3 +70,7 @@
	 metricsBindAddress: 0.0.0.0:10249
	 conntrack:
	   maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
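The reconfigure decision hinges on `diff -u`'s exit status: 0 means the deployed kubeadm.yaml matches the freshly rendered one, 1 means they differ (as above, where the new kubelet and kube-proxy fields appear). A sketch of reading that status:

-- sketch (Go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configsDiffer runs `diff -u old new` and maps exit status 1 to
// "needs reconfigure", as in the kubeadm.go:602 decision above.
func configsDiffer(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ
	}
	return false, err // diff itself failed
}

func main() {
	fmt.Println(configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
}
-- /sketch --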
	I0530 21:28:19.606207 2415330 kubeadm.go:1123] stopping kube-system containers ...
	I0530 21:28:19.606219 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0530 21:28:19.606309 2415330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0530 21:28:19.637448 2415330 cri.go:88] found id: ""
	I0530 21:28:19.637541 2415330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0530 21:28:19.651106 2415330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0530 21:28:19.661485 2415330 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 30 21:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 30 21:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May 30 21:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 30 21:27 /etc/kubernetes/scheduler.conf
	
	I0530 21:28:19.661583 2415330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0530 21:28:19.671935 2415330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0530 21:28:19.682309 2415330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0530 21:28:19.692262 2415330 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0530 21:28:19.692394 2415330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0530 21:28:19.702917 2415330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0530 21:28:19.712961 2415330 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0530 21:28:19.713052 2415330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0530 21:28:19.722985 2415330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0530 21:28:19.733460 2415330 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0530 21:28:19.733487 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0530 21:28:19.825876 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0530 21:28:22.253060 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.427106341s)
	I0530 21:28:22.253101 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0530 21:28:22.485474 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0530 21:28:22.596386 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
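
The five Run lines above re-drive kubeadm phase by phase rather than with a single "kubeadm init": certs, kubeconfig files, kubelet start, control-plane static pods, then local etcd, all from the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing under the same assumptions as the log (kubeadm staged under /var/lib/minikube/binaries, invoked through bash with sudo); the helper name is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhase executes one "kubeadm init phase" exactly as the log does,
// with the staged binaries directory prepended to PATH.
func runPhase(phase string) error {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase `+
			phase+` --config /var/tmp/minikube/kubeadm.yaml`)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		if err := runPhase(phase); err != nil {
			fmt.Println("phase failed:", phase, err)
			return
		}
	}
}
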
	I0530 21:28:22.699020 2415330 api_server.go:52] waiting for apiserver process to appear ...
	I0530 21:28:22.699147 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:23.214758 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:23.714102 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:24.214239 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:24.714223 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:25.214583 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:25.715145 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:26.214988 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:26.714210 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:27.214186 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:27.714987 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:28.214687 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:28.715015 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:29.214147 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:29.715143 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:30.214797 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:30.714488 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:31.215055 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:31.714281 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:32.214186 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:32.714504 2415330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:28:32.749820 2415330 api_server.go:72] duration metric: took 10.05079997s to wait for apiserver process to appear ...
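
The burst of pgrep calls above is a fixed-interval poll: roughly every 500ms the runner asks for a kube-apiserver process whose full command line mentions minikube, and the loop exits once pgrep returns 0 (here after about 10s). A sketch of that wait, not minikube's implementation; the 2-minute budget is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching process exists or the
// timeout elapses, mirroring the ~500ms cadence visible in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep -xnf matches the newest process whose full command line fits the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
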
	I0530 21:28:32.749844 2415330 api_server.go:88] waiting for apiserver healthz status ...
	I0530 21:28:32.749862 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:37.753520 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:28:38.254178 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:43.255430 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:28:43.255473 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:48.256628 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:28:48.256667 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:53.257042 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:28:53.257112 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:53.756459 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:48916->192.168.76.2:8443: read: connection reset by peer
	I0530 21:28:53.756497 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:53.756871 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:28:54.254407 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:54.254826 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:28:54.754484 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:28:59.755565 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:28:59.755607 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:04.756643 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:29:04.756682 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:09.757587 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:29:09.757627 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:14.758790 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:29:14.758830 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:15.135850 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:59638->192.168.76.2:8443: read: connection reset by peer
	I0530 21:29:15.254082 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:15.254518 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:15.754026 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:15.754427 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:16.253685 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:16.254099 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:16.754402 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:16.754792 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:17.254494 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:17.254937 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:17.754478 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:17.754872 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:18.254553 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:18.254977 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:18.753728 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:18.754178 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:19.253818 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:19.254188 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:19.753952 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:19.754316 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:20.253979 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:20.254386 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:20.754051 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:20.754496 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:21.253937 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:21.254356 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:21.753686 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:21.754146 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:22.253705 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:22.254138 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:22.753682 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:22.754021 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:23.254424 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:23.254819 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:23.753910 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:23.754339 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:24.254183 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:24.254553 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:24.754272 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:24.754689 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:25.254351 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:25.254751 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:25.754467 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:25.754932 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:26.254483 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:26.254910 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:26.753752 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:26.754116 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:27.253669 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:27.254051 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:27.753674 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:27.754054 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:28.253685 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:28.254136 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:28.753868 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:28.754326 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:29.253857 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:29.254234 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:29.754073 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:29.754468 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:30.254032 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:30.254406 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:30.754089 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:30.754563 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:31.253687 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:31.254094 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:31.753735 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:31.754345 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:32.253679 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:32.254125 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
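
Two distinct failure modes alternate through the healthz attempts above: "context deadline exceeded" after ~5s means something accepted the connection but never answered (the client timeout fired), while "connection refused" and "connection reset by peer" mean nothing is listening, or the apiserver died mid-handshake, the signature of a crash-looping control plane. A sketch of the probe pattern, one GET per attempt with a short client timeout; skipping TLS verification is an assumption for brevity, a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz retries GET /healthz until it returns 200 or attempts run out.
func probeHealthz(url string, attempts int, interval time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps before "context deadline exceeded"
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	var err error
	for i := 0; i < attempts; i++ {
		var resp *http.Response
		if resp, err = client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy: last error: %w", err)
}

func main() {
	fmt.Println(probeHealthz("https://192.168.76.2:8443/healthz", 60, 500*time.Millisecond))
}
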
	I0530 21:29:32.753746 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:29:32.753848 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:29:32.794473 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:29:32.794502 2415330 cri.go:88] found id: "2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88"
	I0530 21:29:32.794508 2415330 cri.go:88] found id: ""
	I0530 21:29:32.794514 2415330 logs.go:284] 2 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12 2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88]
	I0530 21:29:32.794566 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:32.799780 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:32.804756 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:29:32.804823 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:29:32.846094 2415330 cri.go:88] found id: "8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332"
	I0530 21:29:32.846126 2415330 cri.go:88] found id: ""
	I0530 21:29:32.846137 2415330 logs.go:284] 1 containers: [8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332]
	I0530 21:29:32.846234 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:32.853143 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:29:32.853238 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:29:32.904764 2415330 cri.go:88] found id: ""
	I0530 21:29:32.904790 2415330 logs.go:284] 0 containers: []
	W0530 21:29:32.904800 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:29:32.904818 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:29:32.904891 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:29:32.953425 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:29:32.953448 2415330 cri.go:88] found id: ""
	I0530 21:29:32.953460 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:29:32.953526 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:32.959543 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:29:32.959624 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:29:33.064883 2415330 cri.go:88] found id: ""
	I0530 21:29:33.064904 2415330 logs.go:284] 0 containers: []
	W0530 21:29:33.064912 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:29:33.064919 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:29:33.064980 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:29:33.104344 2415330 cri.go:88] found id: "6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:29:33.104365 2415330 cri.go:88] found id: ""
	I0530 21:29:33.104373 2415330 logs.go:284] 1 containers: [6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5]
	I0530 21:29:33.104445 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:33.110817 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:29:33.110899 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:29:33.162435 2415330 cri.go:88] found id: ""
	I0530 21:29:33.162528 2415330 logs.go:284] 0 containers: []
	W0530 21:29:33.162552 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:29:33.162571 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:29:33.162697 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:29:33.253183 2415330 cri.go:88] found id: ""
	I0530 21:29:33.253204 2415330 logs.go:284] 0 containers: []
	W0530 21:29:33.253217 2415330 logs.go:286] No container was found matching "storage-provisioner"
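
Once the healthz wait is exhausted, the runner switches to diagnosis: for each expected control-plane component it lists all matching containers (running or exited) and records the IDs. Empty results for coredns, kube-proxy, kindnet, and storage-provisioner are consistent with a control plane that never finished starting, so nothing beyond the static pods was ever scheduled. The enumeration reduces to one crictl call per component, sketched here with the same flags the log shows:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (any state) whose name matches the
// component, exactly as "sudo crictl ps -a --quiet --name=<component>" does.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}
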
	I0530 21:29:33.253231 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:29:33.253242 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:29:33.317930 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:29:33.317956 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0530 21:29:43.465847 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.147862304s)
	W0530 21:29:43.465884 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
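
The 10.1s runtime of the failed describe-nodes call above lines up with the handshake cap in Go's standard HTTP transport, which kubectl's client stack also defaults to: the server accepted the TCP connection but never completed the TLS handshake, so the client gave up at the 10-second mark. That correspondence is an inference from the timing, not something the log states; the default itself is easy to confirm:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Go's default transport caps TLS handshakes at 10s, which matches the
	// ~10.1s the kubectl invocation above took before "TLS handshake timeout".
	t := http.DefaultTransport.(*http.Transport)
	fmt.Println("default TLS handshake timeout:", t.TLSHandshakeTimeout)
}
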
	I0530 21:29:43.465893 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:29:43.465906 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:29:43.503702 2415330 logs.go:123] Gathering logs for etcd [8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332] ...
	I0530 21:29:43.503731 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332"
	I0530 21:29:43.535966 2415330 logs.go:123] Gathering logs for kube-controller-manager [6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5] ...
	I0530 21:29:43.536008 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:29:43.587935 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:29:43.587969 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:29:43.643452 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:29:43.643486 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:29:43.721021 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:29:43.721056 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:29:43.743899 2415330 logs.go:123] Gathering logs for kube-apiserver [2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88] ...
	I0530 21:29:43.743931 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88"
	I0530 21:29:43.777363 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:29:43.777391 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
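
Each gathering round pulls the same fixed set of sources: per-container tails via crictl, unit logs for kubelet and containerd via journalctl, warning-and-above kernel messages via dmesg, plus a container-status listing. The commands below are taken verbatim from the Run lines above and bundled into a single ad-hoc collector; the bundling itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		// CombinedOutput keeps stderr, which is where crictl reports failures.
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", name, out)
	}
}
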
	I0530 21:29:46.329671 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:51.330771 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:29:51.330821 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:29:51.330885 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:29:51.390370 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:29:51.390389 2415330 cri.go:88] found id: "2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88"
	I0530 21:29:51.390394 2415330 cri.go:88] found id: ""
	I0530 21:29:51.390401 2415330 logs.go:284] 2 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12 2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88]
	I0530 21:29:51.390460 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:51.403613 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:51.413764 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:29:51.413848 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:29:51.461409 2415330 cri.go:88] found id: "8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332"
	I0530 21:29:51.461428 2415330 cri.go:88] found id: ""
	I0530 21:29:51.461437 2415330 logs.go:284] 1 containers: [8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332]
	I0530 21:29:51.461496 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:51.473919 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:29:51.473993 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:29:51.530077 2415330 cri.go:88] found id: ""
	I0530 21:29:51.530099 2415330 logs.go:284] 0 containers: []
	W0530 21:29:51.530107 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:29:51.530113 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:29:51.530176 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:29:51.608170 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:29:51.608207 2415330 cri.go:88] found id: ""
	I0530 21:29:51.608215 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:29:51.608275 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:51.618226 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:29:51.618313 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:29:51.753178 2415330 cri.go:88] found id: ""
	I0530 21:29:51.753202 2415330 logs.go:284] 0 containers: []
	W0530 21:29:51.753211 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:29:51.753218 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:29:51.753285 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:29:51.814067 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:29:51.814091 2415330 cri.go:88] found id: "6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:29:51.814098 2415330 cri.go:88] found id: ""
	I0530 21:29:51.814105 2415330 logs.go:284] 2 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5]
	I0530 21:29:51.814162 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:51.828518 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:51.842806 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:29:51.842877 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:29:51.901881 2415330 cri.go:88] found id: ""
	I0530 21:29:51.901907 2415330 logs.go:284] 0 containers: []
	W0530 21:29:51.901916 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:29:51.901923 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:29:51.901988 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:29:52.026238 2415330 cri.go:88] found id: ""
	I0530 21:29:52.026264 2415330 logs.go:284] 0 containers: []
	W0530 21:29:52.026274 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:29:52.026284 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:29:52.026296 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:29:52.173897 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:29:52.173944 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0530 21:29:55.232805 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.058837361s)
	W0530 21:29:55.232853 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:29:55.232862 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:29:55.232900 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:29:55.283737 2415330 logs.go:123] Gathering logs for kube-apiserver [2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88] ...
	I0530 21:29:55.283786 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88"
	W0530 21:29:55.332159 2415330 logs.go:130] failed kube-apiserver [2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88": Process exited with status 1
	stdout:
	
	stderr:
	E0530 21:29:55.327426    1981 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88\": not found" containerID="2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88"
	time="2023-05-30T21:29:55Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88\": not found"
	 output: 
	** stderr ** 
	E0530 21:29:55.327426    1981 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88\": not found" containerID="2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88"
	time="2023-05-30T21:29:55Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"2d8c73f921c8dbf81b5a132696b0342ea9dfeab9e6b974c835ff102c4e00cb88\": not found"
	
	** /stderr **
	I0530 21:29:55.332215 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:29:55.332237 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:29:55.398377 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:29:55.398405 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:29:55.446101 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:29:55.446134 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:29:55.469592 2415330 logs.go:123] Gathering logs for etcd [8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332] ...
	I0530 21:29:55.469650 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332"
	W0530 21:29:55.516031 2415330 logs.go:130] failed etcd [8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332": Process exited with status 1
	stdout:
	
	stderr:
	E0530 21:29:55.507570    2016 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332\": not found" containerID="8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332"
	time="2023-05-30T21:29:55Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332\": not found"
	 output: 
	** stderr ** 
	E0530 21:29:55.507570    2016 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332\": not found" containerID="8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332"
	time="2023-05-30T21:29:55Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"8b937589f5123704b39c302945ea7c253d4a5a2cf41622b46ba2957a319ba332\": not found"
	
	** /stderr **
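
Both NotFound failures above are the same race: the container IDs were captured at 21:29:32, but kubelet restarted the static pods in the meantime and the runtime pruned the old containers, so by 21:29:55 the IDs no longer resolve (note the etcd ID changes from 8b9375... to 38469b... in the next round). Re-resolving IDs immediately before fetching narrows, though cannot close, that window; a sketch using only the flags already shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// logsForComponent re-lists matching container IDs and returns the first one
// whose logs are still fetchable, tolerating IDs pruned mid-flight.
func logsForComponent(component string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return "", err
	}
	for _, id := range strings.Fields(string(out)) {
		if logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput(); err == nil {
			return string(logs), nil
		}
		// A NotFound here just means the ID was pruned between listing and fetching.
	}
	return "", fmt.Errorf("no %s container with retrievable logs", component)
}

func main() {
	logs, err := logsForComponent("etcd")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(logs)
}
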
	I0530 21:29:55.516057 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:29:55.516082 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:29:55.596069 2415330 logs.go:123] Gathering logs for kube-controller-manager [6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5] ...
	I0530 21:29:55.596151 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:29:55.662341 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:29:55.662426 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:29:58.234321 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:29:58.234746 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:29:58.234792 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:29:58.234846 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:29:58.270140 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:29:58.270161 2415330 cri.go:88] found id: ""
	I0530 21:29:58.270168 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:29:58.270233 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:58.275630 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:29:58.275706 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:29:58.306630 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:29:58.306650 2415330 cri.go:88] found id: ""
	I0530 21:29:58.306657 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:29:58.306714 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:58.311234 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:29:58.311318 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:29:58.342783 2415330 cri.go:88] found id: ""
	I0530 21:29:58.342857 2415330 logs.go:284] 0 containers: []
	W0530 21:29:58.342872 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:29:58.342882 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:29:58.342942 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:29:58.377799 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:29:58.377819 2415330 cri.go:88] found id: ""
	I0530 21:29:58.377826 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:29:58.377880 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:58.382262 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:29:58.382329 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:29:58.412371 2415330 cri.go:88] found id: ""
	I0530 21:29:58.412392 2415330 logs.go:284] 0 containers: []
	W0530 21:29:58.412400 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:29:58.412406 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:29:58.412472 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:29:58.444277 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:29:58.445333 2415330 cri.go:88] found id: "6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:29:58.445341 2415330 cri.go:88] found id: ""
	I0530 21:29:58.445349 2415330 logs.go:284] 2 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5]
	I0530 21:29:58.445407 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:58.450008 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:29:58.454425 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:29:58.454549 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:29:58.497648 2415330 cri.go:88] found id: ""
	I0530 21:29:58.497669 2415330 logs.go:284] 0 containers: []
	W0530 21:29:58.497686 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:29:58.497691 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:29:58.497750 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:29:58.540117 2415330 cri.go:88] found id: ""
	I0530 21:29:58.540146 2415330 logs.go:284] 0 containers: []
	W0530 21:29:58.540155 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:29:58.540172 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:29:58.540184 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:29:58.592073 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:29:58.592107 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:29:58.630668 2415330 logs.go:123] Gathering logs for kube-controller-manager [6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5] ...
	I0530 21:29:58.630701 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:29:58.691030 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:29:58.691063 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:29:58.756790 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:29:58.756823 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:29:58.821135 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:29:58.821181 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:29:58.909755 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:29:58.909793 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:29:58.930681 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:29:58.930712 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:29:59.047746 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:29:59.047769 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:29:59.047782 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:29:59.089893 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:29:59.089923 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:01.656646 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:01.659255 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:01.659336 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:01.659409 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:01.719395 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:01.719424 2415330 cri.go:88] found id: ""
	I0530 21:30:01.719432 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:01.719499 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:01.741736 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:01.741823 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:01.827986 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:01.828007 2415330 cri.go:88] found id: ""
	I0530 21:30:01.828016 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:01.828088 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:01.834485 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:01.834556 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:01.967356 2415330 cri.go:88] found id: ""
	I0530 21:30:01.967378 2415330 logs.go:284] 0 containers: []
	W0530 21:30:01.967387 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:01.967394 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:01.967484 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:02.087509 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:02.087529 2415330 cri.go:88] found id: ""
	I0530 21:30:02.087538 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:02.087626 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:02.096002 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:02.096080 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:02.388755 2415330 cri.go:88] found id: ""
	I0530 21:30:02.388794 2415330 logs.go:284] 0 containers: []
	W0530 21:30:02.388804 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:02.388811 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:02.388890 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:02.465140 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:02.465167 2415330 cri.go:88] found id: "6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:30:02.465173 2415330 cri.go:88] found id: ""
	I0530 21:30:02.465180 2415330 logs.go:284] 2 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5]
	I0530 21:30:02.465246 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:02.472726 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:02.484278 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:02.484380 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:02.535500 2415330 cri.go:88] found id: ""
	I0530 21:30:02.535526 2415330 logs.go:284] 0 containers: []
	W0530 21:30:02.535535 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:02.535542 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:02.535607 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:02.587451 2415330 cri.go:88] found id: ""
	I0530 21:30:02.587472 2415330 logs.go:284] 0 containers: []
	W0530 21:30:02.587481 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:02.587496 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:02.587510 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:02.615947 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:02.615982 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:02.675062 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:02.675096 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:02.722249 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:02.722279 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:02.789185 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:02.789228 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:02.829163 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:02.829194 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:02.873141 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:02.873171 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:02.960184 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:02.960222 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:03.065608 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:03.065633 2415330 logs.go:123] Gathering logs for kube-controller-manager [6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5] ...
	I0530 21:30:03.065647 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ec6d59f57c47608ec4198fb192aeebf2336ff23354bc0b6298f66b0a073e9f5"
	I0530 21:30:03.133416 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:03.133451 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
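
From here the log settles into a steady outer loop: probe /healthz, and on any error re-enumerate containers and re-gather logs before probing again, until the overall start budget runs out. Compressed to its shape (the step bodies are stand-ins for the blocks of log lines above, and the 6-minute budget is an assumption):

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-ins for the real steps; each corresponds to a block of log lines above.
func probeHealthz() error { return errors.New("connection refused") }
func gatherDiagnostics()  { fmt.Println("enumerating containers, tailing logs ...") }

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if err := probeHealthz(); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		gatherDiagnostics()
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for apiserver")
}
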
	I0530 21:30:05.700119 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:05.700519 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:05.700560 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:05.700616 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:05.731569 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:05.731592 2415330 cri.go:88] found id: ""
	I0530 21:30:05.731600 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:05.731682 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:05.736230 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:05.736314 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:05.765974 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:05.765995 2415330 cri.go:88] found id: ""
	I0530 21:30:05.766002 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:05.766061 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:05.770937 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:05.771010 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:05.801604 2415330 cri.go:88] found id: ""
	I0530 21:30:05.801625 2415330 logs.go:284] 0 containers: []
	W0530 21:30:05.801633 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:05.801639 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:05.801700 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:05.832609 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:05.832630 2415330 cri.go:88] found id: ""
	I0530 21:30:05.832637 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:05.832691 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:05.837161 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:05.837230 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:05.866336 2415330 cri.go:88] found id: ""
	I0530 21:30:05.866358 2415330 logs.go:284] 0 containers: []
	W0530 21:30:05.866384 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:05.866393 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:05.866461 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:05.895851 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:05.895919 2415330 cri.go:88] found id: ""
	I0530 21:30:05.895933 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:05.896003 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:05.900698 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:05.900808 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:05.938485 2415330 cri.go:88] found id: ""
	I0530 21:30:05.938508 2415330 logs.go:284] 0 containers: []
	W0530 21:30:05.938527 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:05.938533 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:05.938606 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:05.970553 2415330 cri.go:88] found id: ""
	I0530 21:30:05.970625 2415330 logs.go:284] 0 containers: []
	W0530 21:30:05.970646 2415330 logs.go:286] No container was found matching "storage-provisioner"
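The block above is minikube's container-discovery pass: for each expected control-plane component it shells out to "crictl ps -a --quiet --name=<component>" and records whichever container IDs come back. The empty results are expected here: the static pods (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) run without the apiserver, while coredns, kube-proxy, kindnet, and storage-provisioner are scheduled through it and so cannot start until it is healthy. A minimal Go sketch of the discovery step follows; it is an illustrative reconstruction, not minikube's cri.go implementation, and it assumes it runs directly on the node (minikube runs the same command over SSH).

// Illustrative sketch of the "listing CRI containers" step above: run
// crictl with a name filter and collect the container IDs it prints.
// Assumption: executed on the node itself, with crictl on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // one 64-hex container ID per line
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := findContainers(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}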
	I0530 21:30:05.970677 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:05.970699 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:06.045251 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:06.045293 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:06.148219 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:06.148240 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:06.148252 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:06.201722 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:06.201768 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:06.288882 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:06.288922 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:06.352165 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:06.352196 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:06.379102 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:06.379129 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:06.417241 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:06.417270 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:06.470768 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:06.470808 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
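Each roughly three-second cycle in this stretch is the same wait loop: probe https://192.168.76.2:8443/healthz, and on failure re-run the discovery and log-gathering pass above. A hedged sketch of such a poll loop follows; the URL and intervals are taken from the log, but the code is illustrative rather than minikube's api_server.go, which among other differences verifies the cluster CA instead of skipping TLS verification.

// Minimal sketch of a healthz wait loop, assuming the address from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// per-request budget; a stalled request surfaces as "Client.Timeout
		// exceeded while awaiting headers", as seen later in this log
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// assumption for the sketch only: skip verification instead of
			// loading the cluster CA the way minikube does
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.76.2:8443/healthz" // endpoint from the log
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err) // e.g. connect: connection refused
			time.Sleep(3 * time.Second)               // the log shows ~3s between probes
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for healthz")
}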
	I0530 21:30:09.036110 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:09.036601 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:09.036657 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:09.036715 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:09.067080 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:09.067103 2415330 cri.go:88] found id: ""
	I0530 21:30:09.067110 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:09.067178 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:09.072793 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:09.072883 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:09.102632 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:09.102653 2415330 cri.go:88] found id: ""
	I0530 21:30:09.102661 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:09.102721 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:09.107509 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:09.107586 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:09.139433 2415330 cri.go:88] found id: ""
	I0530 21:30:09.139454 2415330 logs.go:284] 0 containers: []
	W0530 21:30:09.139463 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:09.139469 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:09.139533 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:09.170873 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:09.170895 2415330 cri.go:88] found id: ""
	I0530 21:30:09.170902 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:09.170960 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:09.175470 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:09.175544 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:09.207758 2415330 cri.go:88] found id: ""
	I0530 21:30:09.207783 2415330 logs.go:284] 0 containers: []
	W0530 21:30:09.207796 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:09.207803 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:09.207869 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:09.239517 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:09.239541 2415330 cri.go:88] found id: ""
	I0530 21:30:09.239550 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:09.239612 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:09.244550 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:09.244623 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:09.275286 2415330 cri.go:88] found id: ""
	I0530 21:30:09.275308 2415330 logs.go:284] 0 containers: []
	W0530 21:30:09.275317 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:09.275323 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:09.275393 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:09.307778 2415330 cri.go:88] found id: ""
	I0530 21:30:09.307802 2415330 logs.go:284] 0 containers: []
	W0530 21:30:09.307811 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:09.307826 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:09.307837 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:09.383730 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:09.383767 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:09.444142 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:09.444177 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:09.478719 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:09.478745 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:09.498659 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:09.498687 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:09.580705 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:09.580725 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:09.580738 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:09.616028 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:09.616059 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:09.655231 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:09.655261 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:09.714635 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:09.714676 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:12.272108 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:12.272526 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:12.272566 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:12.272630 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:12.302844 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:12.302865 2415330 cri.go:88] found id: ""
	I0530 21:30:12.302873 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:12.302932 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:12.307442 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:12.307511 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:12.338560 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:12.338584 2415330 cri.go:88] found id: ""
	I0530 21:30:12.338592 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:12.338651 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:12.343302 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:12.343385 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:12.374700 2415330 cri.go:88] found id: ""
	I0530 21:30:12.374722 2415330 logs.go:284] 0 containers: []
	W0530 21:30:12.374730 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:12.374736 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:12.374800 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:12.405708 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:12.405730 2415330 cri.go:88] found id: ""
	I0530 21:30:12.405737 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:12.405797 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:12.410157 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:12.410231 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:12.440908 2415330 cri.go:88] found id: ""
	I0530 21:30:12.440934 2415330 logs.go:284] 0 containers: []
	W0530 21:30:12.440942 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:12.440949 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:12.441015 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:12.476656 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:12.476677 2415330 cri.go:88] found id: ""
	I0530 21:30:12.476687 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:12.476743 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:12.481448 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:12.481514 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:12.512331 2415330 cri.go:88] found id: ""
	I0530 21:30:12.512353 2415330 logs.go:284] 0 containers: []
	W0530 21:30:12.512361 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:12.512367 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:12.512429 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:12.544893 2415330 cri.go:88] found id: ""
	I0530 21:30:12.544916 2415330 logs.go:284] 0 containers: []
	W0530 21:30:12.544924 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:12.544938 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:12.544953 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:12.579897 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:12.579926 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:12.616494 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:12.616522 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:12.676281 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:12.676318 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:12.725939 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:12.725974 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:12.764533 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:12.764572 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:12.785506 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:12.785537 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:12.870491 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:12.870514 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:12.870529 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:12.947039 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:12.947076 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:15.511380 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:15.511811 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:15.511864 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:15.511921 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:15.542000 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:15.542022 2415330 cri.go:88] found id: ""
	I0530 21:30:15.542031 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:15.542090 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:15.546733 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:15.546803 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:15.578588 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:15.578609 2415330 cri.go:88] found id: ""
	I0530 21:30:15.578617 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:15.578677 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:15.583239 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:15.583315 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:15.613710 2415330 cri.go:88] found id: ""
	I0530 21:30:15.613731 2415330 logs.go:284] 0 containers: []
	W0530 21:30:15.613739 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:15.613746 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:15.613806 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:15.644906 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:15.644927 2415330 cri.go:88] found id: ""
	I0530 21:30:15.644935 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:15.644995 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:15.649757 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:15.649867 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:15.692206 2415330 cri.go:88] found id: ""
	I0530 21:30:15.692232 2415330 logs.go:284] 0 containers: []
	W0530 21:30:15.692241 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:15.692249 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:15.692315 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:15.724601 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:15.724636 2415330 cri.go:88] found id: ""
	I0530 21:30:15.724645 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:15.724702 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:15.729451 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:15.729523 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:15.760071 2415330 cri.go:88] found id: ""
	I0530 21:30:15.760095 2415330 logs.go:284] 0 containers: []
	W0530 21:30:15.760105 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:15.760112 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:15.760170 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:15.790845 2415330 cri.go:88] found id: ""
	I0530 21:30:15.790918 2415330 logs.go:284] 0 containers: []
	W0530 21:30:15.790937 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:15.790953 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:15.790965 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:15.813153 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:15.813187 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:15.911675 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:15.911698 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:15.911710 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:15.947063 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:15.947095 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:16.006179 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:16.006217 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:16.085407 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:16.085445 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:16.118219 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:16.118250 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:16.167820 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:16.167853 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:16.225665 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:16.225705 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:18.763626 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:18.764027 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:18.764067 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:18.764128 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:18.794855 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:18.794876 2415330 cri.go:88] found id: ""
	I0530 21:30:18.794887 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:18.794944 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:18.799652 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:18.799724 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:18.831746 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:18.831765 2415330 cri.go:88] found id: ""
	I0530 21:30:18.831772 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:18.831844 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:18.836609 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:18.836680 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:18.870175 2415330 cri.go:88] found id: ""
	I0530 21:30:18.870202 2415330 logs.go:284] 0 containers: []
	W0530 21:30:18.870211 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:18.870227 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:18.870307 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:18.906329 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:18.906349 2415330 cri.go:88] found id: ""
	I0530 21:30:18.906356 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:18.906414 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:18.911657 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:18.911747 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:18.942266 2415330 cri.go:88] found id: ""
	I0530 21:30:18.942287 2415330 logs.go:284] 0 containers: []
	W0530 21:30:18.942295 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:18.942301 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:18.942365 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:18.971478 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:18.971499 2415330 cri.go:88] found id: ""
	I0530 21:30:18.971506 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:18.971563 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:18.976076 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:18.976153 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:19.006398 2415330 cri.go:88] found id: ""
	I0530 21:30:19.006467 2415330 logs.go:284] 0 containers: []
	W0530 21:30:19.006489 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:19.006510 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:19.006576 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:19.040905 2415330 cri.go:88] found id: ""
	I0530 21:30:19.040930 2415330 logs.go:284] 0 containers: []
	W0530 21:30:19.040939 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:19.040953 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:19.040965 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:19.077003 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:19.077032 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:19.153242 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:19.153279 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:19.243964 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:19.243989 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:19.244003 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:19.278724 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:19.278789 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:19.336291 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:19.336328 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:19.387684 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:19.387718 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:19.449406 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:19.449439 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:19.470256 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:19.470339 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:22.008116 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:22.008581 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:22.008632 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:22.008693 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:22.040441 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:22.040468 2415330 cri.go:88] found id: ""
	I0530 21:30:22.040477 2415330 logs.go:284] 1 containers: [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:22.040544 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:22.045483 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:22.045553 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:22.075788 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:22.075809 2415330 cri.go:88] found id: ""
	I0530 21:30:22.075816 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:22.075893 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:22.080895 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:22.080984 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:22.111158 2415330 cri.go:88] found id: ""
	I0530 21:30:22.111187 2415330 logs.go:284] 0 containers: []
	W0530 21:30:22.111195 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:22.111202 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:22.111270 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:22.143351 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:22.143370 2415330 cri.go:88] found id: ""
	I0530 21:30:22.143378 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:22.143433 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:22.147793 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:22.147862 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:22.177954 2415330 cri.go:88] found id: ""
	I0530 21:30:22.177975 2415330 logs.go:284] 0 containers: []
	W0530 21:30:22.177982 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:22.177989 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:22.178053 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:22.209804 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:22.209825 2415330 cri.go:88] found id: ""
	I0530 21:30:22.209834 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:22.209894 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:22.214299 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:22.214373 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:22.244611 2415330 cri.go:88] found id: ""
	I0530 21:30:22.244680 2415330 logs.go:284] 0 containers: []
	W0530 21:30:22.244704 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:22.244724 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:22.244811 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:22.278603 2415330 cri.go:88] found id: ""
	I0530 21:30:22.278625 2415330 logs.go:284] 0 containers: []
	W0530 21:30:22.278633 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:22.278647 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:22.278663 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:22.359206 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:22.359240 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:22.359253 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:22.399464 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:22.399492 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:22.453999 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:22.454040 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:22.504154 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:22.504187 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:22.562841 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:22.562880 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:22.599939 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:22.599973 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:22.620795 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:22.620833 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:22.654129 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:22.654156 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:25.229730 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:25.230213 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:25.230277 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:25.230366 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:25.289227 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:25.289247 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:25.289253 2415330 cri.go:88] found id: ""
	I0530 21:30:25.289260 2415330 logs.go:284] 2 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:25.289361 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:25.296354 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:25.304694 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:25.304775 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:25.363792 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:25.363810 2415330 cri.go:88] found id: ""
	I0530 21:30:25.363823 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:25.363880 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:25.368934 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:25.369010 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:25.414498 2415330 cri.go:88] found id: ""
	I0530 21:30:25.414520 2415330 logs.go:284] 0 containers: []
	W0530 21:30:25.414528 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:25.414535 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:25.414591 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:25.452401 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:25.452425 2415330 cri.go:88] found id: ""
	I0530 21:30:25.452433 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:25.452546 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:25.457634 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:25.457737 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:25.502347 2415330 cri.go:88] found id: ""
	I0530 21:30:25.502370 2415330 logs.go:284] 0 containers: []
	W0530 21:30:25.502379 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:25.502416 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:25.502500 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:25.538997 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:25.539020 2415330 cri.go:88] found id: ""
	I0530 21:30:25.539029 2415330 logs.go:284] 1 containers: [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:25.539155 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:25.544484 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:25.544594 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:25.583393 2415330 cri.go:88] found id: ""
	I0530 21:30:25.583417 2415330 logs.go:284] 0 containers: []
	W0530 21:30:25.583427 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:25.583465 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:25.583551 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:25.624609 2415330 cri.go:88] found id: ""
	I0530 21:30:25.624631 2415330 logs.go:284] 0 containers: []
	W0530 21:30:25.624639 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:25.624684 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:25.624702 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:25.709698 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:25.709741 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:25.732237 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:25.732269 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0530 21:30:35.864464 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.132173458s)
	W0530 21:30:35.864501 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
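The failure mode changes here. Until 21:30:22 every probe died instantly with "connection refused" (nothing listening on 8443); then a second kube-apiserver container (260ff1bd...) appeared in the 21:30:25 listing, and this describe-nodes call stalled for 10.13s before failing with a TLS handshake timeout. That pattern usually means port 8443 is now accepting TCP connections while the restarted apiserver is still initializing. The sketch below separates the TCP connect from the TLS handshake to tell the two states apart; the address comes from the log and the code is illustrative only.

// Probe TCP and TLS separately to distinguish "refused" from "listening
// but not yet serving". Assumption: self-signed cert, so verification is
// skipped for the sketch.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.76.2:8443" // apiserver address from the log
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("tcp:", err) // refused => nothing bound to the port yet
		return
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(3 * time.Second))
	tlsConn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
	if err := tlsConn.Handshake(); err != nil {
		fmt.Println("tls:", err) // timeout => port open, apiserver still starting
		return
	}
	fmt.Println("tls handshake ok")
}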
	I0530 21:30:35.864510 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:35.864520 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:35.927636 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:35.927670 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:35.963882 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:30:35.963913 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:36.000526 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:36.000559 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:36.038011 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:36.038047 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:36.070320 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:36.070350 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:36.131240 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:36.131281 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:38.694353 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:43.695361 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:30:43.695423 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:43.695492 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:43.727744 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:43.727766 2415330 cri.go:88] found id: "d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	I0530 21:30:43.727771 2415330 cri.go:88] found id: ""
	I0530 21:30:43.727778 2415330 logs.go:284] 2 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]
	I0530 21:30:43.727839 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:43.732442 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:43.736782 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:43.736858 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:43.767011 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:43.767031 2415330 cri.go:88] found id: ""
	I0530 21:30:43.767039 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:43.767093 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:43.771585 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:43.771661 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:43.802293 2415330 cri.go:88] found id: ""
	I0530 21:30:43.802368 2415330 logs.go:284] 0 containers: []
	W0530 21:30:43.802384 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:43.802392 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:43.802464 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:43.833583 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:43.833603 2415330 cri.go:88] found id: ""
	I0530 21:30:43.833610 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:43.833670 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:43.838194 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:43.838266 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:43.868404 2415330 cri.go:88] found id: ""
	I0530 21:30:43.868425 2415330 logs.go:284] 0 containers: []
	W0530 21:30:43.868433 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:43.868440 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:43.868500 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:43.898999 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:43.899020 2415330 cri.go:88] found id: "dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:43.899026 2415330 cri.go:88] found id: ""
	I0530 21:30:43.899032 2415330 logs.go:284] 2 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62]
	I0530 21:30:43.899129 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:43.903611 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:43.907986 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:43.908116 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:43.939481 2415330 cri.go:88] found id: ""
	I0530 21:30:43.939502 2415330 logs.go:284] 0 containers: []
	W0530 21:30:43.939511 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:43.939517 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:43.939577 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:43.970393 2415330 cri.go:88] found id: ""
	I0530 21:30:43.970416 2415330 logs.go:284] 0 containers: []
	W0530 21:30:43.970424 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:43.970434 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:43.970447 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:44.001434 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:44.001524 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:44.060311 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:44.060344 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:44.123759 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:44.123796 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:44.167827 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:44.167858 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:44.239607 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:44.239642 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:44.260574 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:44.260605 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0530 21:30:47.142356 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.881714442s)
	W0530 21:30:47.142410 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:47.142429 2415330 logs.go:123] Gathering logs for kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12] ...
	I0530 21:30:47.142455 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	W0530 21:30:47.181245 2415330 logs.go:130] failed kube-apiserver [d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12": Process exited with status 1
	stdout:
	
	stderr:
	E0530 21:30:47.178125    3559 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12\": not found" containerID="d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	time="2023-05-30T21:30:47Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12\": not found"
	 output: 
	** stderr ** 
	E0530 21:30:47.178125    3559 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12\": not found" containerID="d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12"
	time="2023-05-30T21:30:47Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"d8d0fa925414fc01b168f72398e6bcc5ef67f26fc5a8bb74c73d7735eb141c12\": not found"
	
	** /stderr **
	I0530 21:30:47.181269 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:30:47.181286 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:47.221746 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:30:47.221780 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:47.255649 2415330 logs.go:123] Gathering logs for kube-controller-manager [dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62] ...
	I0530 21:30:47.255677 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea43e60aea7baa612a6700d77173899a521044803a652fbfc2ab1269d27db62"
	I0530 21:30:49.805394 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:49.805803 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:49.805845 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:49.805902 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:49.838955 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:49.838976 2415330 cri.go:88] found id: ""
	I0530 21:30:49.838983 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:30:49.839042 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:49.843857 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:49.843920 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:49.875327 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:49.875346 2415330 cri.go:88] found id: ""
	I0530 21:30:49.875354 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:49.875418 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:49.880189 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:49.880251 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:49.919610 2415330 cri.go:88] found id: ""
	I0530 21:30:49.919633 2415330 logs.go:284] 0 containers: []
	W0530 21:30:49.919641 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:49.919647 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:49.919717 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:49.955374 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:49.955406 2415330 cri.go:88] found id: ""
	I0530 21:30:49.955414 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:49.955485 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:49.960374 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:49.960451 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:49.995006 2415330 cri.go:88] found id: ""
	I0530 21:30:49.995026 2415330 logs.go:284] 0 containers: []
	W0530 21:30:49.995034 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:49.995040 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:49.995106 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:50.045055 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:50.045075 2415330 cri.go:88] found id: ""
	I0530 21:30:50.045082 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:30:50.045135 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:50.050022 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:50.050087 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:50.087798 2415330 cri.go:88] found id: ""
	I0530 21:30:50.087831 2415330 logs.go:284] 0 containers: []
	W0530 21:30:50.087840 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:50.087848 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:50.087911 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:50.125713 2415330 cri.go:88] found id: ""
	I0530 21:30:50.125750 2415330 logs.go:284] 0 containers: []
	W0530 21:30:50.125759 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:50.125772 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:50.125785 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:50.215189 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:50.215209 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:30:50.215224 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:50.249731 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:50.249762 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:50.281546 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:30:50.281576 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:50.331351 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:50.331384 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:50.395904 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:50.395945 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:50.471820 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:50.471856 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:50.493725 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:50.493753 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:50.553220 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:50.553254 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:53.090133 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:53.090662 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:53.090708 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:53.090772 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:53.121421 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:53.121444 2415330 cri.go:88] found id: ""
	I0530 21:30:53.121452 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:30:53.121508 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:53.126247 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:53.126322 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:53.158364 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:53.158383 2415330 cri.go:88] found id: ""
	I0530 21:30:53.158390 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:53.158444 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:53.162872 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:53.162985 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:53.192671 2415330 cri.go:88] found id: ""
	I0530 21:30:53.192702 2415330 logs.go:284] 0 containers: []
	W0530 21:30:53.192711 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:53.192718 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:53.192786 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:53.223855 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:53.223875 2415330 cri.go:88] found id: ""
	I0530 21:30:53.223882 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:53.223937 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:53.228351 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:53.228418 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:53.258809 2415330 cri.go:88] found id: ""
	I0530 21:30:53.258884 2415330 logs.go:284] 0 containers: []
	W0530 21:30:53.258904 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:53.258912 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:53.258972 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:53.289752 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:53.289778 2415330 cri.go:88] found id: ""
	I0530 21:30:53.289795 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:30:53.289851 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:53.294557 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:53.294629 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:53.324749 2415330 cri.go:88] found id: ""
	I0530 21:30:53.324817 2415330 logs.go:284] 0 containers: []
	W0530 21:30:53.324837 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:53.324863 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:53.324970 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:53.358263 2415330 cri.go:88] found id: ""
	I0530 21:30:53.358289 2415330 logs.go:284] 0 containers: []
	W0530 21:30:53.358297 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:53.358313 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:53.358329 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:53.417974 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:30:53.418010 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:53.465989 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:53.466023 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:53.525946 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:53.525981 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:53.596934 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:53.596970 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:53.617739 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:30:53.617770 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:53.651604 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:53.651634 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:53.736176 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:53.736197 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:53.736209 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:53.768168 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:53.768196 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:56.310301 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:56.310786 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:56.310834 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:56.310894 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:56.345144 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:56.345169 2415330 cri.go:88] found id: ""
	I0530 21:30:56.345176 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:30:56.345241 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:56.350281 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:56.350356 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:56.384722 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:56.384743 2415330 cri.go:88] found id: ""
	I0530 21:30:56.384751 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:56.384811 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:56.390683 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:56.390760 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:56.427207 2415330 cri.go:88] found id: ""
	I0530 21:30:56.427229 2415330 logs.go:284] 0 containers: []
	W0530 21:30:56.427237 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:56.427243 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:56.427310 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:56.459323 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:56.459346 2415330 cri.go:88] found id: ""
	I0530 21:30:56.459354 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:56.459410 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:56.464144 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:56.464217 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:56.495166 2415330 cri.go:88] found id: ""
	I0530 21:30:56.495238 2415330 logs.go:284] 0 containers: []
	W0530 21:30:56.495272 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:56.495280 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:56.495353 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:56.527734 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:56.527799 2415330 cri.go:88] found id: ""
	I0530 21:30:56.527822 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:30:56.527906 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:56.532834 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:56.532956 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:56.566847 2415330 cri.go:88] found id: ""
	I0530 21:30:56.566871 2415330 logs.go:284] 0 containers: []
	W0530 21:30:56.566880 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:56.566886 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:56.566965 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:56.597974 2415330 cri.go:88] found id: ""
	I0530 21:30:56.597994 2415330 logs.go:284] 0 containers: []
	W0530 21:30:56.598002 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:56.598016 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:30:56.598030 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:30:56.619710 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:56.619742 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:56.652688 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:30:56.652770 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:56.713957 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:56.713991 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:56.777700 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:56.777735 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:30:56.866441 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:30:56.866488 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:30:56.960588 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:30:56.960610 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:30:56.960625 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:56.994942 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:30:56.994973 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:57.043414 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:57.043450 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:59.579337 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:30:59.579840 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:30:59.579899 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:30:59.579958 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:30:59.611641 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:30:59.611662 2415330 cri.go:88] found id: ""
	I0530 21:30:59.611670 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:30:59.611725 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:59.616250 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:30:59.616317 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:30:59.652001 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:59.652024 2415330 cri.go:88] found id: ""
	I0530 21:30:59.652031 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:30:59.652088 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:59.656928 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:30:59.657009 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:30:59.688341 2415330 cri.go:88] found id: ""
	I0530 21:30:59.688364 2415330 logs.go:284] 0 containers: []
	W0530 21:30:59.688374 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:30:59.688380 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:30:59.688441 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:30:59.719551 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:30:59.719570 2415330 cri.go:88] found id: ""
	I0530 21:30:59.719577 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:30:59.719631 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:59.724371 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:30:59.724441 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:30:59.755384 2415330 cri.go:88] found id: ""
	I0530 21:30:59.755402 2415330 logs.go:284] 0 containers: []
	W0530 21:30:59.755410 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:30:59.755416 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:30:59.755482 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:30:59.790736 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:30:59.790756 2415330 cri.go:88] found id: ""
	I0530 21:30:59.790763 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:30:59.790819 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:30:59.795359 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:30:59.795477 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:30:59.826739 2415330 cri.go:88] found id: ""
	I0530 21:30:59.826763 2415330 logs.go:284] 0 containers: []
	W0530 21:30:59.826771 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:30:59.826777 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:30:59.826879 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:30:59.858540 2415330 cri.go:88] found id: ""
	I0530 21:30:59.858560 2415330 logs.go:284] 0 containers: []
	W0530 21:30:59.858568 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:30:59.858583 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:30:59.858595 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:30:59.889951 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:30:59.889975 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:30:59.951636 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:30:59.951676 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:30:59.987592 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:30:59.987670 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:00.149276 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:00.149346 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:00.174374 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:00.174415 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:00.268128 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:00.268151 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:00.268164 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:00.305122 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:00.305153 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:00.369152 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:00.369188 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:02.918981 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:02.919377 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:02.919423 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:02.919513 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:02.950454 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:02.950482 2415330 cri.go:88] found id: ""
	I0530 21:31:02.950489 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:02.950555 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:02.955160 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:02.955236 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:02.992345 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:02.992377 2415330 cri.go:88] found id: ""
	I0530 21:31:02.992384 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:02.992441 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:02.996966 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:02.997035 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:03.028865 2415330 cri.go:88] found id: ""
	I0530 21:31:03.028888 2415330 logs.go:284] 0 containers: []
	W0530 21:31:03.028896 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:03.028903 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:03.028961 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:03.059512 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:03.059533 2415330 cri.go:88] found id: ""
	I0530 21:31:03.059540 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:03.059598 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:03.064246 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:03.064323 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:03.094838 2415330 cri.go:88] found id: ""
	I0530 21:31:03.094861 2415330 logs.go:284] 0 containers: []
	W0530 21:31:03.094879 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:03.094886 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:03.094943 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:03.126275 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:03.126300 2415330 cri.go:88] found id: ""
	I0530 21:31:03.126308 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:03.126364 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:03.131167 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:03.131245 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:03.160901 2415330 cri.go:88] found id: ""
	I0530 21:31:03.160922 2415330 logs.go:284] 0 containers: []
	W0530 21:31:03.160930 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:03.160937 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:03.161023 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:03.192848 2415330 cri.go:88] found id: ""
	I0530 21:31:03.192877 2415330 logs.go:284] 0 containers: []
	W0530 21:31:03.192888 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:03.192903 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:03.192917 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:03.280445 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:03.280465 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:03.280479 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:03.311508 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:03.311574 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:03.363017 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:03.363052 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:03.402624 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:03.402652 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:03.480311 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:03.480349 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:03.502500 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:03.502528 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:03.547941 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:03.547971 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:03.626772 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:03.626807 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:06.190902 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:06.191333 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:06.191383 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:06.191439 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:06.221472 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:06.221491 2415330 cri.go:88] found id: ""
	I0530 21:31:06.221499 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:06.221557 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:06.226180 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:06.226271 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:06.257648 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:06.257668 2415330 cri.go:88] found id: ""
	I0530 21:31:06.257678 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:06.257733 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:06.262244 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:06.262311 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:06.293271 2415330 cri.go:88] found id: ""
	I0530 21:31:06.293291 2415330 logs.go:284] 0 containers: []
	W0530 21:31:06.293335 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:06.293342 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:06.293401 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:06.323578 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:06.323601 2415330 cri.go:88] found id: ""
	I0530 21:31:06.323609 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:06.323667 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:06.328353 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:06.328454 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:06.361353 2415330 cri.go:88] found id: ""
	I0530 21:31:06.361373 2415330 logs.go:284] 0 containers: []
	W0530 21:31:06.361381 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:06.361388 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:06.361454 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:06.394191 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:06.394213 2415330 cri.go:88] found id: ""
	I0530 21:31:06.394221 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:06.394280 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:06.399002 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:06.399078 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:06.430046 2415330 cri.go:88] found id: ""
	I0530 21:31:06.430068 2415330 logs.go:284] 0 containers: []
	W0530 21:31:06.430076 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:06.430082 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:06.430183 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:06.462073 2415330 cri.go:88] found id: ""
	I0530 21:31:06.462094 2415330 logs.go:284] 0 containers: []
	W0530 21:31:06.462101 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:06.462114 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:06.462127 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:06.537744 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:06.537798 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:06.559135 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:06.559163 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:06.593926 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:06.593955 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:06.624789 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:06.624827 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:06.684953 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:06.684988 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:06.733916 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:06.733996 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:06.796230 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:06.796265 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:06.883691 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:06.883714 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:06.883728 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:09.420866 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:09.421349 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:09.421391 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:09.421451 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:09.451608 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:09.451671 2415330 cri.go:88] found id: ""
	I0530 21:31:09.451691 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:09.451766 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:09.456152 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:09.456226 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:09.487056 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:09.487080 2415330 cri.go:88] found id: ""
	I0530 21:31:09.487088 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:09.487170 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:09.491786 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:09.491876 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:09.522639 2415330 cri.go:88] found id: ""
	I0530 21:31:09.522704 2415330 logs.go:284] 0 containers: []
	W0530 21:31:09.522726 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:09.522747 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:09.522846 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:09.554901 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:09.554963 2415330 cri.go:88] found id: ""
	I0530 21:31:09.554984 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:09.555054 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:09.559624 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:09.559730 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:09.589624 2415330 cri.go:88] found id: ""
	I0530 21:31:09.589684 2415330 logs.go:284] 0 containers: []
	W0530 21:31:09.589706 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:09.589725 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:09.589813 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:09.621038 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:09.621060 2415330 cri.go:88] found id: ""
	I0530 21:31:09.621068 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:09.621124 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:09.625648 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:09.625740 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:09.656171 2415330 cri.go:88] found id: ""
	I0530 21:31:09.656206 2415330 logs.go:284] 0 containers: []
	W0530 21:31:09.656215 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:09.656241 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:09.656324 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:09.687762 2415330 cri.go:88] found id: ""
	I0530 21:31:09.687828 2415330 logs.go:284] 0 containers: []
	W0530 21:31:09.687850 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:09.687877 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:09.687913 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:09.720681 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:09.720707 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:09.768625 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:09.768656 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:09.831212 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:09.831254 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:09.865581 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:09.865610 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:09.889281 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:09.889337 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:09.974445 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:09.974467 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:09.974480 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:10.034791 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:10.034867 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:10.070066 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:10.070099 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:12.663867 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:12.664228 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:12.664276 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:12.664336 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:12.709754 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:12.709778 2415330 cri.go:88] found id: ""
	I0530 21:31:12.709785 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:12.709846 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:12.715100 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:12.715194 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:12.757864 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:12.757888 2415330 cri.go:88] found id: ""
	I0530 21:31:12.757895 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:12.757952 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:12.762968 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:12.763042 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:12.802942 2415330 cri.go:88] found id: ""
	I0530 21:31:12.802965 2415330 logs.go:284] 0 containers: []
	W0530 21:31:12.802974 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:12.802980 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:12.803040 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:12.845820 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:12.845843 2415330 cri.go:88] found id: ""
	I0530 21:31:12.845851 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:12.845910 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:12.851235 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:12.851311 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:12.890897 2415330 cri.go:88] found id: ""
	I0530 21:31:12.890920 2415330 logs.go:284] 0 containers: []
	W0530 21:31:12.890929 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:12.890936 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:12.890997 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:12.940626 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:12.940660 2415330 cri.go:88] found id: ""
	I0530 21:31:12.940668 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:12.940749 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:12.947180 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:12.947264 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:12.994140 2415330 cri.go:88] found id: ""
	I0530 21:31:12.994164 2415330 logs.go:284] 0 containers: []
	W0530 21:31:12.994180 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:12.994187 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:12.994260 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:13.034670 2415330 cri.go:88] found id: ""
	I0530 21:31:13.034693 2415330 logs.go:284] 0 containers: []
	W0530 21:31:13.034702 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:13.034748 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:13.034768 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:13.124496 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:13.124534 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:13.240930 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:13.240951 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:13.240964 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:13.302895 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:13.302953 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:13.365860 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:13.365912 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:13.388516 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:13.388546 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:13.432001 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:13.432034 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:13.476721 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:13.476776 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:13.553113 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:13.553188 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
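
Each round re-enumerates the control-plane containers with `sudo crictl ps -a --quiet --name=<component>`, which prints one container ID per line (or nothing), and the "N containers: [...]" lines record the result. A rough standalone equivalent of that enumeration step, assuming crictl on the PATH and sudo rights as in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the enumeration step in the log:
	// `crictl ps -a --quiet --name=<component>` prints one container ID per
	// line, or nothing when the component has no container yet.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			// Matches the log's "N containers: [...]" bookkeeping.
			fmt.Printf("%d containers: %s %v\n", len(ids), c, ids)
		}
	}
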
	I0530 21:31:16.108420 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:16.108783 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:16.108844 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:16.108905 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:16.156775 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:16.156804 2415330 cri.go:88] found id: ""
	I0530 21:31:16.156811 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:16.156887 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:16.163893 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:16.163996 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:16.212066 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:16.212094 2415330 cri.go:88] found id: ""
	I0530 21:31:16.212104 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:16.212163 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:16.217606 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:16.217708 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:16.256377 2415330 cri.go:88] found id: ""
	I0530 21:31:16.256403 2415330 logs.go:284] 0 containers: []
	W0530 21:31:16.256437 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:16.256451 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:16.256532 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:16.304683 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:16.304730 2415330 cri.go:88] found id: ""
	I0530 21:31:16.304738 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:16.304847 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:16.310253 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:16.310369 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:16.358858 2415330 cri.go:88] found id: ""
	I0530 21:31:16.358887 2415330 logs.go:284] 0 containers: []
	W0530 21:31:16.358895 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:16.358920 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:16.359001 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:16.399142 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:16.399171 2415330 cri.go:88] found id: ""
	I0530 21:31:16.399178 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:16.399269 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:16.406311 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:16.406428 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:16.447754 2415330 cri.go:88] found id: ""
	I0530 21:31:16.447783 2415330 logs.go:284] 0 containers: []
	W0530 21:31:16.447791 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:16.447820 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:16.447897 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:16.496947 2415330 cri.go:88] found id: ""
	I0530 21:31:16.496973 2415330 logs.go:284] 0 containers: []
	W0530 21:31:16.497000 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:16.497030 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:16.497047 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:16.609245 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:16.609291 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:16.638261 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:16.638343 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:16.766259 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:16.766308 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:16.838150 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:16.838206 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:16.969960 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:16.969989 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:16.970001 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:17.024890 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:17.024923 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:17.065070 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:17.065110 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:17.123338 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:17.123408 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
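
The "container status" step runs a small shell fallback chain: `which crictl || echo crictl` resolves crictl's full path but keeps the bare name if lookup fails, and if the whole crictl invocation errors out, `|| sudo docker ps -a` tries the Docker CLI instead. A sketch of invoking that exact chain from Go, roughly as ssh_runner.go does on the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Verbatim fallback chain from the log: prefer crictl (resolving its
		// path when possible) and fall back to `docker ps -a` only if the
		// crictl invocation fails outright.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("container status failed: %v\n", err)
		}
		fmt.Print(string(out))
	}

The echo branch matters: if `which` finds nothing, the chain still attempts a bare `crictl` so the failure names the missing tool instead of dying in the command substitution.
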
	I0530 21:31:19.668143 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:19.668508 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:19.668554 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:19.668612 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:19.719899 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:19.719922 2415330 cri.go:88] found id: ""
	I0530 21:31:19.719929 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:19.719987 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:19.725882 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:19.725961 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:19.787784 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:19.787803 2415330 cri.go:88] found id: ""
	I0530 21:31:19.787810 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:19.787873 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:19.797164 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:19.797231 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:19.844600 2415330 cri.go:88] found id: ""
	I0530 21:31:19.844618 2415330 logs.go:284] 0 containers: []
	W0530 21:31:19.844626 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:19.844633 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:19.844692 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:19.906350 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:19.906425 2415330 cri.go:88] found id: ""
	I0530 21:31:19.906445 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:19.906538 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:19.915092 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:19.915162 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:19.955734 2415330 cri.go:88] found id: ""
	I0530 21:31:19.955760 2415330 logs.go:284] 0 containers: []
	W0530 21:31:19.955768 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:19.955782 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:19.955845 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:20.015638 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:20.015716 2415330 cri.go:88] found id: ""
	I0530 21:31:20.015737 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:20.015831 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:20.027133 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:20.027245 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:20.082807 2415330 cri.go:88] found id: ""
	I0530 21:31:20.082880 2415330 logs.go:284] 0 containers: []
	W0530 21:31:20.082903 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:20.082921 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:20.083021 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:20.142373 2415330 cri.go:88] found id: ""
	I0530 21:31:20.142438 2415330 logs.go:284] 0 containers: []
	W0530 21:31:20.142458 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:20.142484 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:20.142537 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:20.257506 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:20.257578 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:20.300402 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:20.300471 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:20.383474 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:20.383550 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:20.451428 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:20.451541 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:20.509013 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:20.509050 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:20.665772 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:20.665791 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:20.665804 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:20.718786 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:20.718825 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:20.757858 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:20.757885 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
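
kubelet and containerd are collected differently from the control-plane components: they run as systemd units, so the gather step tails the journal (`journalctl -u <unit> -n 400`) instead of `crictl logs`. A minimal standalone version of that step, assuming systemd and sudo as in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitTail mirrors `journalctl -u <unit> -n 400`: the last 400 journal
	// lines for a systemd unit. kubelet and containerd are gathered this way
	// because neither runs as a CRI container.
	func unitTail(unit string, lines int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).Output()
		if err != nil {
			return "", fmt.Errorf("journalctl -u %s: %w", unit, err)
		}
		return string(out), nil
	}

	func main() {
		for _, u := range []string{"kubelet", "containerd"} {
			text, err := unitTail(u, 400)
			if err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Printf("=== %s: %d bytes of journal ===\n", u, len(text))
		}
	}
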
	I0530 21:31:23.357080 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:23.357447 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:23.357492 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:23.357549 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:23.411871 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:23.411895 2415330 cri.go:88] found id: ""
	I0530 21:31:23.411903 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:23.411959 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:23.419508 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:23.419592 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:23.467060 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:23.467084 2415330 cri.go:88] found id: ""
	I0530 21:31:23.467092 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:23.467148 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:23.473808 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:23.473886 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:23.520430 2415330 cri.go:88] found id: ""
	I0530 21:31:23.520453 2415330 logs.go:284] 0 containers: []
	W0530 21:31:23.520461 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:23.520467 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:23.520526 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:23.568357 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:23.568380 2415330 cri.go:88] found id: ""
	I0530 21:31:23.568388 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:23.568448 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:23.576192 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:23.576268 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:23.638619 2415330 cri.go:88] found id: ""
	I0530 21:31:23.638642 2415330 logs.go:284] 0 containers: []
	W0530 21:31:23.638650 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:23.638656 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:23.638717 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:23.688456 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:23.688479 2415330 cri.go:88] found id: ""
	I0530 21:31:23.688487 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:23.688547 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:23.698079 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:23.698158 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:23.749215 2415330 cri.go:88] found id: ""
	I0530 21:31:23.749240 2415330 logs.go:284] 0 containers: []
	W0530 21:31:23.749249 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:23.749256 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:23.749340 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:23.822621 2415330 cri.go:88] found id: ""
	I0530 21:31:23.822643 2415330 logs.go:284] 0 containers: []
	W0530 21:31:23.822651 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:23.822665 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:23.822677 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:23.936618 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:23.936658 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:23.970207 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:23.970236 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:24.093441 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:24.093465 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:24.093478 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:24.146696 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:24.146734 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:24.208554 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:24.208591 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:24.267586 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:24.267620 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:24.305753 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:24.305783 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:24.369009 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:24.369046 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:26.949009 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:26.949392 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:26.949442 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:26.949501 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:26.993127 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:26.993151 2415330 cri.go:88] found id: ""
	I0530 21:31:26.993160 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:26.993217 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:26.998103 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:26.998183 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:27.034043 2415330 cri.go:88] found id: "38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	I0530 21:31:27.034067 2415330 cri.go:88] found id: ""
	I0530 21:31:27.034074 2415330 logs.go:284] 1 containers: [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]
	I0530 21:31:27.034159 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:27.039331 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:27.039432 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:27.073614 2415330 cri.go:88] found id: ""
	I0530 21:31:27.073649 2415330 logs.go:284] 0 containers: []
	W0530 21:31:27.073659 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:27.073691 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:27.073773 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:27.109509 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:27.109541 2415330 cri.go:88] found id: ""
	I0530 21:31:27.109548 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:27.109642 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:27.116506 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:27.116627 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:27.160395 2415330 cri.go:88] found id: ""
	I0530 21:31:27.160428 2415330 logs.go:284] 0 containers: []
	W0530 21:31:27.160436 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:27.160465 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:27.160544 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:27.211605 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:27.211635 2415330 cri.go:88] found id: ""
	I0530 21:31:27.211643 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:27.211737 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:27.219623 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:27.219724 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:27.257687 2415330 cri.go:88] found id: ""
	I0530 21:31:27.257709 2415330 logs.go:284] 0 containers: []
	W0530 21:31:27.257718 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:27.257725 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:27.257798 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:27.302739 2415330 cri.go:88] found id: ""
	I0530 21:31:27.302761 2415330 logs.go:284] 0 containers: []
	W0530 21:31:27.302770 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:27.302785 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:27.302799 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:27.356583 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:27.356619 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:27.395796 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:27.395825 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:27.473558 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:27.473594 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:27.498563 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:27.498593 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:27.591821 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:27.591844 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:27.591859 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:27.628656 2415330 logs.go:123] Gathering logs for etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061] ...
	I0530 21:31:27.628683 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	W0530 21:31:27.665256 2415330 logs.go:130] failed etcd [38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061": Process exited with status 1
	stdout:
	
	stderr:
	E0530 21:31:27.661972    5278 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061\": not found" containerID="38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	time="2023-05-30T21:31:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061\": not found"
	 output: 
	** stderr ** 
	E0530 21:31:27.661972    5278 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061\": not found" containerID="38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"
	time="2023-05-30T21:31:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061\": not found"
	
	** /stderr **
	I0530 21:31:27.665278 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:27.665292 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:27.726077 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:27.726111 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
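
The etcd fetch at 21:31:27 above shows a race inherent in the list-then-fetch pattern: the ID 38469b80... captured at 21:31:23 was gone by the time `crictl logs` ran (etcd had been restarted; the next round lists a new ID, f6a94f00...). A defensive variant of the fetch step that treats the NotFound exit as a stale-ID signal rather than a hard error, using a hypothetical helper and the crictl path from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs mirrors `crictl logs --tail 400 <id>`. A non-zero
	// exit here usually means the runtime replaced the container between the
	// earlier `crictl ps` listing and this fetch, so report it as stale
	// instead of aborting the whole gather.
	func tailContainerLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("crictl logs %s: %w (container likely replaced; relist and retry)", id[:12], err)
		}
		return string(out), nil
	}

	func main() {
		// Stale etcd ID taken from the log above.
		if _, err := tailContainerLogs("38469b8041f65591b299916aa9429d0959666cb6005ce4ce17981dc21f539061"); err != nil {
			fmt.Println(err)
		}
	}
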
	I0530 21:31:30.293606 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:30.293991 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:30.294042 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:30.294110 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:30.332085 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:30.332110 2415330 cri.go:88] found id: ""
	I0530 21:31:30.332118 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:30.332176 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:30.337724 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:30.337807 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:30.399847 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:30.399873 2415330 cri.go:88] found id: ""
	I0530 21:31:30.399880 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:31:30.399976 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:30.405390 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:30.405466 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:30.443342 2415330 cri.go:88] found id: ""
	I0530 21:31:30.443366 2415330 logs.go:284] 0 containers: []
	W0530 21:31:30.443374 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:30.443380 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:30.443437 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:30.483801 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:30.483820 2415330 cri.go:88] found id: ""
	I0530 21:31:30.483828 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:30.483886 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:30.489342 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:30.489422 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:30.536099 2415330 cri.go:88] found id: ""
	I0530 21:31:30.536122 2415330 logs.go:284] 0 containers: []
	W0530 21:31:30.536131 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:30.536137 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:30.536193 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:30.583006 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:30.583028 2415330 cri.go:88] found id: ""
	I0530 21:31:30.583035 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:30.583090 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:30.589555 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:30.589625 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:30.625226 2415330 cri.go:88] found id: ""
	I0530 21:31:30.625247 2415330 logs.go:284] 0 containers: []
	W0530 21:31:30.625256 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:30.625262 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:30.625395 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:30.658589 2415330 cri.go:88] found id: ""
	I0530 21:31:30.658613 2415330 logs.go:284] 0 containers: []
	W0530 21:31:30.658621 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:30.658636 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:30.658648 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:30.761375 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:30.761438 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:30.793953 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:30.793985 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:30.922515 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:30.922557 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:30.922571 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:30.966145 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:30.966175 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:31.038064 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:31:31.038097 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:31.106235 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:31.106262 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:31.183968 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:31.184007 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:31.272233 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:31.272301 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:33.831267 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:33.831672 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:31:33.831727 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:33.831784 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:33.877204 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:33.877226 2415330 cri.go:88] found id: ""
	I0530 21:31:33.877233 2415330 logs.go:284] 1 containers: [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:33.877288 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:33.884414 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:33.884484 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:33.929569 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:33.929594 2415330 cri.go:88] found id: ""
	I0530 21:31:33.929602 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:31:33.929669 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:33.935955 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:33.936050 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:33.975172 2415330 cri.go:88] found id: ""
	I0530 21:31:33.975191 2415330 logs.go:284] 0 containers: []
	W0530 21:31:33.975199 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:33.975204 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:33.975263 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:34.021725 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:34.021747 2415330 cri.go:88] found id: ""
	I0530 21:31:34.021755 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:34.021814 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:34.027338 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:34.027405 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:34.078027 2415330 cri.go:88] found id: ""
	I0530 21:31:34.078048 2415330 logs.go:284] 0 containers: []
	W0530 21:31:34.078056 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:34.078071 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:34.078147 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:34.133453 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:34.133478 2415330 cri.go:88] found id: ""
	I0530 21:31:34.133486 2415330 logs.go:284] 1 containers: [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:34.133550 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:34.139307 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:34.139375 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:34.185042 2415330 cri.go:88] found id: ""
	I0530 21:31:34.185106 2415330 logs.go:284] 0 containers: []
	W0530 21:31:34.185129 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:34.185156 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:34.185253 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:34.221373 2415330 cri.go:88] found id: ""
	I0530 21:31:34.221403 2415330 logs.go:284] 0 containers: []
	W0530 21:31:34.221411 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:34.221425 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:34.221440 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:34.262499 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:34.262653 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:34.367846 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:34.367883 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:34.422155 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:31:34.422180 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:34.464394 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:34.464418 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:34.529093 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:34.529130 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:34.597846 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:34.597880 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:34.682276 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:34.682311 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:34.704867 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:34.704902 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:34.789569 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:37.289800 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:42.290586 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0530 21:31:42.290643 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:42.290713 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:42.344556 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:31:42.344579 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:42.344585 2415330 cri.go:88] found id: ""
	I0530 21:31:42.344592 2415330 logs.go:284] 2 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:42.344649 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:42.349989 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:42.355677 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:42.355746 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:42.389679 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:42.389701 2415330 cri.go:88] found id: ""
	I0530 21:31:42.389709 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:31:42.389769 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:42.394393 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:42.394462 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:42.430678 2415330 cri.go:88] found id: ""
	I0530 21:31:42.430706 2415330 logs.go:284] 0 containers: []
	W0530 21:31:42.430715 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:42.430723 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:42.430788 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:42.465966 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:42.465987 2415330 cri.go:88] found id: ""
	I0530 21:31:42.465995 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:42.466060 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:42.470920 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:42.470990 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:42.501975 2415330 cri.go:88] found id: ""
	I0530 21:31:42.501996 2415330 logs.go:284] 0 containers: []
	W0530 21:31:42.502004 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:42.502010 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:42.502126 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:42.536766 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:31:42.536792 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:42.536798 2415330 cri.go:88] found id: ""
	I0530 21:31:42.536805 2415330 logs.go:284] 2 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:42.536891 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:42.542380 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:42.546839 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:42.546932 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:42.580756 2415330 cri.go:88] found id: ""
	I0530 21:31:42.580812 2415330 logs.go:284] 0 containers: []
	W0530 21:31:42.580822 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:42.580828 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:42.581004 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:42.617484 2415330 cri.go:88] found id: ""
	I0530 21:31:42.617505 2415330 logs.go:284] 0 containers: []
	W0530 21:31:42.617514 2415330 logs.go:286] No container was found matching "storage-provisioner"
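[editor note] Each discovery pass above runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "0 containers", which produces the "No container was found" warnings for coredns, kube-proxy, kindnet, and storage-provisioner. A minimal, hypothetical sketch of that step, assuming local execution rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the container IDs reported by crictl for a
// component name; --quiet prints one ID per line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Println(len(ids), "containers:", ids, err)
}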
	I0530 21:31:42.617525 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:42.617537 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:42.707382 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:42.707461 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:42.738584 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:42.738678 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0530 21:31:52.854753 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.116039092s)
	W0530 21:31:52.854835 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
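[editor note] When a gathering command fails, the log records the exit status together with the captured stdout and stderr, as in the describe-nodes failure above (TLS handshake timeout while the apiserver is cycling). A hypothetical local sketch of that capture-and-report pattern; in the real test the commands run over SSH inside the node:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runDiagnostic runs a shell command and, on failure, reports the error
// alongside whatever the command wrote to stdout and stderr.
func runDiagnostic(cmd string) {
	c := exec.Command("/bin/bash", "-c", cmd)
	var stdout, stderr bytes.Buffer
	c.Stdout, c.Stderr = &stdout, &stderr
	if err := c.Run(); err != nil {
		fmt.Printf("failed %q: %v\nstdout:\n%s\nstderr:\n%s\n",
			cmd, err, stdout.String(), stderr.String())
	}
}

func main() {
	runDiagnostic("kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
}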
	I0530 21:31:52.854849 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:31:52.854861 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:31:52.890444 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:52.890475 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:52.926327 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:52.926356 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:52.962744 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:31:52.962773 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:52.995710 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:52.995780 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:53.069642 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:31:53.069678 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:31:53.104390 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:53.104421 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:53.168494 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:53.168526 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:55.741796 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:31:56.487171 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:54634->192.168.76.2:8443: read: connection reset by peer
	I0530 21:31:56.487232 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:31:56.487370 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:31:56.577444 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:31:56.577507 2415330 cri.go:88] found id: "260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	I0530 21:31:56.577525 2415330 cri.go:88] found id: ""
	I0530 21:31:56.577543 2415330 logs.go:284] 2 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]
	I0530 21:31:56.577614 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:56.602763 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:56.616936 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:31:56.617018 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:31:56.657338 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:56.657361 2415330 cri.go:88] found id: ""
	I0530 21:31:56.657369 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:31:56.657432 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:56.662578 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:31:56.662651 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:31:56.712273 2415330 cri.go:88] found id: ""
	I0530 21:31:56.712299 2415330 logs.go:284] 0 containers: []
	W0530 21:31:56.712307 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:31:56.712314 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:31:56.712374 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:31:56.761378 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:56.761405 2415330 cri.go:88] found id: ""
	I0530 21:31:56.761413 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:31:56.761471 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:56.766719 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:31:56.766802 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:31:56.807661 2415330 cri.go:88] found id: ""
	I0530 21:31:56.807680 2415330 logs.go:284] 0 containers: []
	W0530 21:31:56.807689 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:31:56.807695 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:31:56.807754 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:31:56.856521 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:31:56.856540 2415330 cri.go:88] found id: "1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:56.856546 2415330 cri.go:88] found id: ""
	I0530 21:31:56.856552 2415330 logs.go:284] 2 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e]
	I0530 21:31:56.856610 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:56.865052 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:31:56.870266 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:31:56.870336 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:31:56.910864 2415330 cri.go:88] found id: ""
	I0530 21:31:56.910927 2415330 logs.go:284] 0 containers: []
	W0530 21:31:56.910950 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:31:56.910968 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:31:56.911057 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:31:56.996923 2415330 cri.go:88] found id: ""
	I0530 21:31:56.996943 2415330 logs.go:284] 0 containers: []
	W0530 21:31:56.996951 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:31:56.996961 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:31:56.996973 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:31:57.030369 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:31:57.030665 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:31:57.090302 2415330 logs.go:123] Gathering logs for kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08] ...
	I0530 21:31:57.090371 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	W0530 21:31:57.131951 2415330 logs.go:130] failed kube-apiserver [260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08": Process exited with status 1
	stdout:
	
	stderr:
	E0530 21:31:57.126776    5914 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08\": not found" containerID="260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	time="2023-05-30T21:31:57Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08\": not found"
	 output: 
	** stderr ** 
	E0530 21:31:57.126776    5914 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08\": not found" containerID="260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08"
	time="2023-05-30T21:31:57Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08\": not found"
	
	** /stderr **
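[editor note] Container 260ff1bd9060... was still listed at 21:31:56 but had been removed by the time its logs were requested at 21:31:57, so crictl reported NotFound; from this point on only one kube-apiserver container appears in each pass. A hypothetical sketch of tolerating that listing/collection race:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailLogs fetches the last 400 log lines for a container, treating a
// NotFound error as "container removed between listing and collection".
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	if err != nil && strings.Contains(string(out), "NotFound") {
		return "", nil // container vanished; skip rather than fail the pass
	}
	return string(out), err
}

func main() {
	logs, err := tailLogs("260ff1bd9060d28c4f3088090dd4de03dcda604c235f8a94ccaac19d26d1ee08")
	fmt.Println(len(logs), err)
}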
	I0530 21:31:57.131987 2415330 logs.go:123] Gathering logs for kube-controller-manager [1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e] ...
	I0530 21:31:57.132001 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1345eac7801b0a53b28ea19671b464275b7b2a6a063d107dba1b72733a6f429e"
	I0530 21:31:57.215929 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:31:57.215964 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:31:57.288963 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:31:57.288999 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:31:57.368733 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:31:57.368770 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:31:57.443863 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:31:57.443903 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:31:57.541817 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:31:57.541842 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:31:57.541855 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:31:57.585069 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:31:57.585100 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:31:57.649958 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:31:57.649997 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:00.215751 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:00.216150 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:00.216208 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:00.216277 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:00.281377 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:00.281403 2415330 cri.go:88] found id: ""
	I0530 21:32:00.281411 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:00.281482 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:00.289802 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:00.289888 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:00.352146 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:00.352171 2415330 cri.go:88] found id: ""
	I0530 21:32:00.352179 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:00.352241 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:00.361547 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:00.361627 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:00.413998 2415330 cri.go:88] found id: ""
	I0530 21:32:00.414024 2415330 logs.go:284] 0 containers: []
	W0530 21:32:00.414033 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:00.414039 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:00.414130 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:00.475521 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:00.475547 2415330 cri.go:88] found id: ""
	I0530 21:32:00.475556 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:00.475612 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:00.480415 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:00.480485 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:00.535487 2415330 cri.go:88] found id: ""
	I0530 21:32:00.535514 2415330 logs.go:284] 0 containers: []
	W0530 21:32:00.535524 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:00.535530 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:00.535590 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:00.596676 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:00.596702 2415330 cri.go:88] found id: ""
	I0530 21:32:00.596710 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:00.596779 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:00.607863 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:00.607951 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:00.651885 2415330 cri.go:88] found id: ""
	I0530 21:32:00.651911 2415330 logs.go:284] 0 containers: []
	W0530 21:32:00.651936 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:00.651944 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:00.652021 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:00.693205 2415330 cri.go:88] found id: ""
	I0530 21:32:00.693250 2415330 logs.go:284] 0 containers: []
	W0530 21:32:00.693264 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:00.693285 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:00.693350 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:00.843153 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:00.843171 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:00.843186 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:00.884273 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:00.884423 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:00.958084 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:00.958109 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:01.058081 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:01.058173 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:01.091925 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:01.091956 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:01.148350 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:01.148423 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:01.288527 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:01.288569 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:01.400896 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:01.400942 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:04.022026 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:04.022472 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:04.022541 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:04.022630 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:04.055713 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:04.055730 2415330 cri.go:88] found id: ""
	I0530 21:32:04.055737 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:04.055792 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:04.061608 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:04.061724 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:04.100542 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:04.100609 2415330 cri.go:88] found id: ""
	I0530 21:32:04.100647 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:04.100740 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:04.105728 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:04.105850 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:04.157370 2415330 cri.go:88] found id: ""
	I0530 21:32:04.157441 2415330 logs.go:284] 0 containers: []
	W0530 21:32:04.157461 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:04.157477 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:04.157568 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:04.205611 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:04.205634 2415330 cri.go:88] found id: ""
	I0530 21:32:04.205642 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:04.205701 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:04.210627 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:04.210700 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:04.255233 2415330 cri.go:88] found id: ""
	I0530 21:32:04.255259 2415330 logs.go:284] 0 containers: []
	W0530 21:32:04.255268 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:04.255275 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:04.255340 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:04.291458 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:04.291539 2415330 cri.go:88] found id: ""
	I0530 21:32:04.291561 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:04.291657 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:04.298340 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:04.298463 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:04.332252 2415330 cri.go:88] found id: ""
	I0530 21:32:04.332323 2415330 logs.go:284] 0 containers: []
	W0530 21:32:04.332345 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:04.332362 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:04.332451 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:04.374857 2415330 cri.go:88] found id: ""
	I0530 21:32:04.374881 2415330 logs.go:284] 0 containers: []
	W0530 21:32:04.374890 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:04.374919 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:04.374937 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:04.507278 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:04.507349 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:04.507375 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:04.545468 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:04.545550 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:04.638489 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:04.638568 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:04.731026 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:04.731108 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:04.816814 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:04.816839 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:04.933557 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:04.933633 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:04.961503 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:04.961572 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:05.007666 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:05.007958 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:07.599049 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:07.599486 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:07.599544 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:07.599621 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:07.662836 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:07.662862 2415330 cri.go:88] found id: ""
	I0530 21:32:07.662884 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:07.662943 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:07.667645 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:07.667728 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:07.711137 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:07.711160 2415330 cri.go:88] found id: ""
	I0530 21:32:07.711168 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:07.711222 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:07.716051 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:07.716126 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:07.772486 2415330 cri.go:88] found id: ""
	I0530 21:32:07.772509 2415330 logs.go:284] 0 containers: []
	W0530 21:32:07.772518 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:07.772524 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:07.772583 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:07.807571 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:07.807590 2415330 cri.go:88] found id: ""
	I0530 21:32:07.807597 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:07.807650 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:07.812976 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:07.813057 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:07.859669 2415330 cri.go:88] found id: ""
	I0530 21:32:07.859757 2415330 logs.go:284] 0 containers: []
	W0530 21:32:07.859769 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:07.859776 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:07.859854 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:07.924541 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:07.924561 2415330 cri.go:88] found id: ""
	I0530 21:32:07.924569 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:07.924631 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:07.936036 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:07.936119 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:07.979074 2415330 cri.go:88] found id: ""
	I0530 21:32:07.979097 2415330 logs.go:284] 0 containers: []
	W0530 21:32:07.979106 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:07.979112 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:07.979174 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:08.015832 2415330 cri.go:88] found id: ""
	I0530 21:32:08.015857 2415330 logs.go:284] 0 containers: []
	W0530 21:32:08.015866 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:08.015881 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:08.015896 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:08.038194 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:08.038229 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:08.086000 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:08.086078 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:08.182084 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:08.182164 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:08.282158 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:08.282230 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:08.389292 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:08.389425 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:08.480456 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:08.480491 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:08.522401 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:08.522424 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:08.567418 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:08.567442 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:08.677165 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:11.177380 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:11.177834 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:11.177888 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:11.177953 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:11.219206 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:11.219255 2415330 cri.go:88] found id: ""
	I0530 21:32:11.219265 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:11.219332 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:11.224294 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:11.224402 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:11.258333 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:11.258359 2415330 cri.go:88] found id: ""
	I0530 21:32:11.258367 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:11.258423 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:11.263193 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:11.263266 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:11.303670 2415330 cri.go:88] found id: ""
	I0530 21:32:11.303695 2415330 logs.go:284] 0 containers: []
	W0530 21:32:11.303704 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:11.303711 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:11.303770 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:11.344122 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:11.344146 2415330 cri.go:88] found id: ""
	I0530 21:32:11.344154 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:11.344213 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:11.350541 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:11.350614 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:11.390797 2415330 cri.go:88] found id: ""
	I0530 21:32:11.390818 2415330 logs.go:284] 0 containers: []
	W0530 21:32:11.390826 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:11.390833 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:11.390898 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:11.429109 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:11.429129 2415330 cri.go:88] found id: ""
	I0530 21:32:11.429137 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:11.429211 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:11.434676 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:11.434749 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:11.472784 2415330 cri.go:88] found id: ""
	I0530 21:32:11.472807 2415330 logs.go:284] 0 containers: []
	W0530 21:32:11.472815 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:11.472821 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:11.472881 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:11.507563 2415330 cri.go:88] found id: ""
	I0530 21:32:11.507583 2415330 logs.go:284] 0 containers: []
	W0530 21:32:11.507591 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:11.507607 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:11.507622 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:11.597648 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:11.597685 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:11.698260 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:11.698299 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:11.722995 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:11.723024 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:11.768124 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:11.768264 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:11.809916 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:11.809942 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:11.867875 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:11.867946 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:11.924110 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:11.924137 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:12.028702 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:12.028778 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:12.138418 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
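[editor note] The same probe-then-gather block repeats every few seconds: each failed healthz check triggers a full diagnostic pass (kubelet, dmesg, describe nodes, per-component logs, containerd) before the next attempt, until the wait deadline expires. A hypothetical sketch of that retry loop; the 3-second interval is inferred from the timestamps above, not taken from minikube source:

package main

import (
	"fmt"
	"time"
)

// waitForAPIServer polls check() until it succeeds or the deadline passes,
// running gather() after every failed probe.
func waitForAPIServer(check func() error, gather func(), deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if err := check(); err == nil {
			return nil
		}
		gather() // collect kubelet, dmesg, describe nodes, component logs...
		time.Sleep(3 * time.Second) // assumption: interval inferred from the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", deadline)
}

func main() {
	err := waitForAPIServer(
		func() error { return fmt.Errorf("connection refused") },
		func() { fmt.Println("gathering logs ...") },
		10*time.Second,
	)
	fmt.Println(err)
}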
	I0530 21:32:14.638776 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:14.639168 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:14.639212 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:14.639273 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:14.683509 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:14.683534 2415330 cri.go:88] found id: ""
	I0530 21:32:14.683541 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:14.683597 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:14.688617 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:14.688689 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:14.730319 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:14.730394 2415330 cri.go:88] found id: ""
	I0530 21:32:14.730403 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:14.730461 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:14.735789 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:14.735877 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:14.776524 2415330 cri.go:88] found id: ""
	I0530 21:32:14.776546 2415330 logs.go:284] 0 containers: []
	W0530 21:32:14.776554 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:14.776560 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:14.776621 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:14.817882 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:14.817901 2415330 cri.go:88] found id: ""
	I0530 21:32:14.817908 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:14.817963 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:14.823697 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:14.823769 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:14.855151 2415330 cri.go:88] found id: ""
	I0530 21:32:14.855171 2415330 logs.go:284] 0 containers: []
	W0530 21:32:14.855179 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:14.855186 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:14.855247 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:14.890508 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:14.890527 2415330 cri.go:88] found id: ""
	I0530 21:32:14.890534 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:14.890598 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:14.899429 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:14.899503 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:14.934875 2415330 cri.go:88] found id: ""
	I0530 21:32:14.934896 2415330 logs.go:284] 0 containers: []
	W0530 21:32:14.934904 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:14.934910 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:14.934989 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:14.973778 2415330 cri.go:88] found id: ""
	I0530 21:32:14.973797 2415330 logs.go:284] 0 containers: []
	W0530 21:32:14.973805 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:14.973818 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:14.973830 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:15.022477 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:15.022509 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:15.063116 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:15.063203 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:15.143777 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:15.143815 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:15.237328 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:15.237347 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:15.237359 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:15.280459 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:15.280492 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:15.356318 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:15.356353 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:15.419310 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:15.419345 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:15.509012 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:15.509054 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:18.040850 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:18.041258 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:18.041322 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:18.041383 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:18.078633 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:18.078654 2415330 cri.go:88] found id: ""
	I0530 21:32:18.078662 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:18.078722 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:18.084664 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:18.084752 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:18.127667 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:18.127688 2415330 cri.go:88] found id: ""
	I0530 21:32:18.127696 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:18.127755 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:18.132908 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:18.132979 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:18.170238 2415330 cri.go:88] found id: ""
	I0530 21:32:18.170259 2415330 logs.go:284] 0 containers: []
	W0530 21:32:18.170297 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:18.170327 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:18.170452 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:18.204324 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:18.204345 2415330 cri.go:88] found id: ""
	I0530 21:32:18.204352 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:18.204408 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:18.209871 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:18.209948 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:18.262094 2415330 cri.go:88] found id: ""
	I0530 21:32:18.262125 2415330 logs.go:284] 0 containers: []
	W0530 21:32:18.262133 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:18.262139 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:18.262245 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:18.297184 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:18.297209 2415330 cri.go:88] found id: ""
	I0530 21:32:18.297216 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:18.297279 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:18.301488 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:18.301554 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:18.334458 2415330 cri.go:88] found id: ""
	I0530 21:32:18.334481 2415330 logs.go:284] 0 containers: []
	W0530 21:32:18.334490 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:18.334496 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:18.334561 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:18.375864 2415330 cri.go:88] found id: ""
	I0530 21:32:18.375884 2415330 logs.go:284] 0 containers: []
	W0530 21:32:18.375892 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:18.375905 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:18.375919 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:18.412932 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:18.412964 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:18.473899 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:18.476030 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:18.552163 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:18.552230 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:18.598482 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:18.598508 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:18.684263 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:18.684302 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:18.788136 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:18.788155 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:18.788168 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:18.821109 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:18.821135 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:18.893879 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:18.893911 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:21.416183 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:21.416588 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:21.416630 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:21.416690 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:21.458780 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:21.458800 2415330 cri.go:88] found id: ""
	I0530 21:32:21.458808 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:21.458867 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:21.464096 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:21.464168 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:21.503714 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:21.503736 2415330 cri.go:88] found id: ""
	I0530 21:32:21.503743 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:21.503856 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:21.509331 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:21.509412 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:21.541592 2415330 cri.go:88] found id: ""
	I0530 21:32:21.541614 2415330 logs.go:284] 0 containers: []
	W0530 21:32:21.541622 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:21.541628 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:21.541691 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:21.575295 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:21.575316 2415330 cri.go:88] found id: ""
	I0530 21:32:21.575324 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:21.575384 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:21.580422 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:21.580499 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:21.610906 2415330 cri.go:88] found id: ""
	I0530 21:32:21.610930 2415330 logs.go:284] 0 containers: []
	W0530 21:32:21.610939 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:21.610945 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:21.611007 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:21.642747 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:21.642770 2415330 cri.go:88] found id: ""
	I0530 21:32:21.642777 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:21.642834 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:21.648114 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:21.648219 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:21.678845 2415330 cri.go:88] found id: ""
	I0530 21:32:21.678866 2415330 logs.go:284] 0 containers: []
	W0530 21:32:21.678875 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:21.678880 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:21.678944 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:21.710437 2415330 cri.go:88] found id: ""
	I0530 21:32:21.710457 2415330 logs.go:284] 0 containers: []
	W0530 21:32:21.710465 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:21.710479 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:21.710491 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:21.732455 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:21.732581 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:21.820607 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:21.820626 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:21.820640 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:21.851329 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:21.851359 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:21.932027 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:21.932070 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:21.971402 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:21.971431 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:22.060809 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:22.060857 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:22.109352 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:22.109469 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:22.163332 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:22.163370 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:24.738689 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:24.739080 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:24.739121 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:24.739171 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:24.774039 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:24.774057 2415330 cri.go:88] found id: ""
	I0530 21:32:24.774064 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:24.774123 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:24.779535 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:24.779601 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:24.813400 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:24.813419 2415330 cri.go:88] found id: ""
	I0530 21:32:24.813426 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:24.813485 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:24.818794 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:24.818866 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:24.854040 2415330 cri.go:88] found id: ""
	I0530 21:32:24.854059 2415330 logs.go:284] 0 containers: []
	W0530 21:32:24.854068 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:24.854074 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:24.854143 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:24.891304 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:24.891323 2415330 cri.go:88] found id: ""
	I0530 21:32:24.891331 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:24.891388 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:24.896943 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:24.897015 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:24.934034 2415330 cri.go:88] found id: ""
	I0530 21:32:24.934057 2415330 logs.go:284] 0 containers: []
	W0530 21:32:24.934064 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:24.934071 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:24.934134 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:24.970111 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:24.970197 2415330 cri.go:88] found id: ""
	I0530 21:32:24.970221 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:24.970315 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:24.976042 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:24.976200 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:25.012987 2415330 cri.go:88] found id: ""
	I0530 21:32:25.013063 2415330 logs.go:284] 0 containers: []
	W0530 21:32:25.013089 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:25.013108 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:25.013223 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:25.048701 2415330 cri.go:88] found id: ""
	I0530 21:32:25.048775 2415330 logs.go:284] 0 containers: []
	W0530 21:32:25.048810 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:25.048854 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:25.048899 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:25.072429 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:25.072509 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:25.126164 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:25.126245 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:25.163521 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:25.163597 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:25.260594 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:25.260671 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:25.321678 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:25.321713 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:25.360558 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:25.360588 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:25.441685 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:25.441724 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:25.520780 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:25.520825 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:25.626859 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:28.127667 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:28.128041 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:28.128081 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:28.128131 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:28.172198 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:28.172217 2415330 cri.go:88] found id: ""
	I0530 21:32:28.172224 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:28.172282 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:28.187878 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:28.187955 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:28.223289 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:28.223307 2415330 cri.go:88] found id: ""
	I0530 21:32:28.223315 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:28.223370 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:28.229034 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:28.229108 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:28.272710 2415330 cri.go:88] found id: ""
	I0530 21:32:28.272803 2415330 logs.go:284] 0 containers: []
	W0530 21:32:28.272835 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:28.272877 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:28.272976 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:28.309876 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:28.309895 2415330 cri.go:88] found id: ""
	I0530 21:32:28.309902 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:28.309960 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:28.315010 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:28.315076 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:28.349567 2415330 cri.go:88] found id: ""
	I0530 21:32:28.349587 2415330 logs.go:284] 0 containers: []
	W0530 21:32:28.349595 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:28.349602 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:28.349664 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:28.386332 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:28.386350 2415330 cri.go:88] found id: ""
	I0530 21:32:28.386357 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:28.386411 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:28.391441 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:28.391575 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:28.427978 2415330 cri.go:88] found id: ""
	I0530 21:32:28.428044 2415330 logs.go:284] 0 containers: []
	W0530 21:32:28.428064 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:28.428083 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:28.428170 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:28.463356 2415330 cri.go:88] found id: ""
	I0530 21:32:28.463375 2415330 logs.go:284] 0 containers: []
	W0530 21:32:28.463383 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:28.463396 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:28.463409 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:28.509555 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:28.509628 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:28.592943 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:28.593028 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:28.682249 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:28.682324 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:28.704209 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:28.704381 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:28.825326 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:28.825385 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:28.825465 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:28.860313 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:28.860391 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:28.959209 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:28.959303 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:29.022813 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:29.022892 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:31.607131 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:31.607490 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:31.607528 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:32:31.607578 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:32:31.694622 2415330 cri.go:88] found id: "4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:31.694641 2415330 cri.go:88] found id: ""
	I0530 21:32:31.694648 2415330 logs.go:284] 1 containers: [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1]
	I0530 21:32:31.694788 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:31.700433 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:32:31.700502 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:32:31.791810 2415330 cri.go:88] found id: "f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:31.791828 2415330 cri.go:88] found id: ""
	I0530 21:32:31.791836 2415330 logs.go:284] 1 containers: [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232]
	I0530 21:32:31.791898 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:31.797826 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:32:31.797897 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:32:31.840994 2415330 cri.go:88] found id: ""
	I0530 21:32:31.841020 2415330 logs.go:284] 0 containers: []
	W0530 21:32:31.841029 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:32:31.841035 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:32:31.841130 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:32:31.917779 2415330 cri.go:88] found id: "7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:31.917800 2415330 cri.go:88] found id: ""
	I0530 21:32:31.917808 2415330 logs.go:284] 1 containers: [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f]
	I0530 21:32:31.917863 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:31.927565 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:32:31.927648 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:32:31.992193 2415330 cri.go:88] found id: ""
	I0530 21:32:31.992214 2415330 logs.go:284] 0 containers: []
	W0530 21:32:31.992222 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:32:31.992228 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:32:31.992312 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:32:32.044694 2415330 cri.go:88] found id: "31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:32.044774 2415330 cri.go:88] found id: ""
	I0530 21:32:32.044805 2415330 logs.go:284] 1 containers: [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af]
	I0530 21:32:32.044947 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:32:32.053217 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:32:32.053332 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:32:32.120174 2415330 cri.go:88] found id: ""
	I0530 21:32:32.120195 2415330 logs.go:284] 0 containers: []
	W0530 21:32:32.120202 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:32:32.120211 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:32:32.120278 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:32:32.244253 2415330 cri.go:88] found id: ""
	I0530 21:32:32.244272 2415330 logs.go:284] 0 containers: []
	W0530 21:32:32.244280 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:32:32.244295 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:32:32.244308 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:32:32.283419 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:32:32.283519 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:32:32.514616 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:32:32.514633 2415330 logs.go:123] Gathering logs for kube-scheduler [7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f] ...
	I0530 21:32:32.514646 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5c250e12aa7b3d80bdff858ac5421f8a6820ccff2ab47bb0340a3709c1656f"
	I0530 21:32:32.725373 2415330 logs.go:123] Gathering logs for kube-controller-manager [31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af] ...
	I0530 21:32:32.725465 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d44d7794eaa86ddace326b994080c5e4273ec308edcd1badd87652628f03af"
	I0530 21:32:32.868562 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:32:32.868645 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:32:33.002321 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:32:33.002401 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:32:33.197850 2415330 logs.go:123] Gathering logs for kube-apiserver [4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1] ...
	I0530 21:32:33.197933 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cfe2a3ff23da4b500092e92cfab3263f91256714704038938e1344b6e54eac1"
	I0530 21:32:33.284596 2415330 logs.go:123] Gathering logs for etcd [f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232] ...
	I0530 21:32:33.284684 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a94f00bf998f07ddc6688b019b152e97017884c01064daef49d2f25cdd9232"
	I0530 21:32:33.404571 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:32:33.404642 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:32:36.127549 2415330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0530 21:32:36.127890 2415330 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0530 21:32:36.127933 2415330 kubeadm.go:640] restartCluster took 4m16.548335357s
	W0530 21:32:36.128028 2415330 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0530 21:32:36.128050 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0530 21:32:39.030202 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.902132608s)
	I0530 21:32:39.030275 2415330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 21:32:39.051315 2415330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0530 21:32:39.063736 2415330 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0530 21:32:39.063816 2415330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0530 21:32:39.077004 2415330 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0530 21:32:39.077041 2415330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0530 21:32:39.202379 2415330 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
	I0530 21:32:39.203113 2415330 kubeadm.go:322] [preflight] Running pre-flight checks
	I0530 21:32:39.269623 2415330 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0530 21:32:39.269689 2415330 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-aws
	I0530 21:32:39.269722 2415330 kubeadm.go:322] OS: Linux
	I0530 21:32:39.269765 2415330 kubeadm.go:322] CGROUPS_CPU: enabled
	I0530 21:32:39.269811 2415330 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0530 21:32:39.269856 2415330 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0530 21:32:39.269902 2415330 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0530 21:32:39.269948 2415330 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0530 21:32:39.269993 2415330 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0530 21:32:39.270035 2415330 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0530 21:32:39.270081 2415330 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0530 21:32:39.471799 2415330 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0530 21:32:39.471904 2415330 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0530 21:32:39.471993 2415330 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0530 21:32:39.677400 2415330 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0530 21:32:39.694036 2415330 out.go:204]   - Generating certificates and keys ...
	I0530 21:32:39.694188 2415330 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0530 21:32:39.697259 2415330 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0530 21:32:39.697382 2415330 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0530 21:32:39.697446 2415330 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0530 21:32:39.697516 2415330 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0530 21:32:39.697570 2415330 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0530 21:32:39.697634 2415330 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0530 21:32:39.697700 2415330 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0530 21:32:39.697782 2415330 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0530 21:32:39.697856 2415330 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0530 21:32:39.697894 2415330 kubeadm.go:322] [certs] Using the existing "sa" key
	I0530 21:32:39.697956 2415330 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0530 21:32:40.067544 2415330 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0530 21:32:41.069078 2415330 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0530 21:32:41.567500 2415330 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0530 21:32:42.485331 2415330 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0530 21:32:42.508661 2415330 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0530 21:32:42.513864 2415330 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0530 21:32:42.513922 2415330 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0530 21:32:42.646266 2415330 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0530 21:32:42.649362 2415330 out.go:204]   - Booting up control plane ...
	I0530 21:32:42.649469 2415330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0530 21:32:42.678714 2415330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0530 21:32:42.680428 2415330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0530 21:32:42.682563 2415330 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0530 21:32:42.686644 2415330 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0530 21:33:22.686890 2415330 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0530 21:36:42.690010 2415330 kubeadm.go:322] 
	I0530 21:36:42.690072 2415330 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0530 21:36:42.690114 2415330 kubeadm.go:322] 		timed out waiting for the condition
	I0530 21:36:42.690119 2415330 kubeadm.go:322] 
	I0530 21:36:42.690152 2415330 kubeadm.go:322] 	This error is likely caused by:
	I0530 21:36:42.690184 2415330 kubeadm.go:322] 		- The kubelet is not running
	I0530 21:36:42.690284 2415330 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0530 21:36:42.690290 2415330 kubeadm.go:322] 
	I0530 21:36:42.690390 2415330 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0530 21:36:42.690421 2415330 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0530 21:36:42.690452 2415330 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0530 21:36:42.690456 2415330 kubeadm.go:322] 
	I0530 21:36:42.690556 2415330 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0530 21:36:42.690634 2415330 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0530 21:36:42.690640 2415330 kubeadm.go:322] 
	I0530 21:36:42.690738 2415330 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0530 21:36:42.690829 2415330 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0530 21:36:42.690901 2415330 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0530 21:36:42.690977 2415330 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0530 21:36:42.690983 2415330 kubeadm.go:322] 
	I0530 21:36:42.693824 2415330 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
	I0530 21:36:42.693992 2415330 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0530 21:36:42.694081 2415330 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0530 21:36:42.694143 2415330 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0530 21:36:42.694376 2415330 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1036-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0530 21:36:42.694425 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0530 21:36:43.727343 2415330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.032872557s)
	I0530 21:36:43.727416 2415330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 21:36:43.741013 2415330 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0530 21:36:43.741076 2415330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0530 21:36:43.752226 2415330 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0530 21:36:43.752271 2415330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0530 21:36:43.828053 2415330 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
	I0530 21:36:43.828492 2415330 kubeadm.go:322] [preflight] Running pre-flight checks
	I0530 21:36:43.867136 2415330 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0530 21:36:43.867205 2415330 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-aws
	I0530 21:36:43.867245 2415330 kubeadm.go:322] OS: Linux
	I0530 21:36:43.867295 2415330 kubeadm.go:322] CGROUPS_CPU: enabled
	I0530 21:36:43.867344 2415330 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0530 21:36:43.867393 2415330 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0530 21:36:43.867441 2415330 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0530 21:36:43.867491 2415330 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0530 21:36:43.867542 2415330 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0530 21:36:43.867587 2415330 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0530 21:36:43.867636 2415330 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0530 21:36:43.974219 2415330 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0530 21:36:43.974326 2415330 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0530 21:36:43.974420 2415330 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0530 21:36:44.136974 2415330 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0530 21:36:44.140567 2415330 out.go:204]   - Generating certificates and keys ...
	I0530 21:36:44.140694 2415330 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0530 21:36:44.140820 2415330 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0530 21:36:44.140972 2415330 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0530 21:36:44.141036 2415330 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0530 21:36:44.141121 2415330 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0530 21:36:44.141177 2415330 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0530 21:36:44.141238 2415330 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0530 21:36:44.141296 2415330 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0530 21:36:44.141404 2415330 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0530 21:36:44.141473 2415330 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0530 21:36:44.141510 2415330 kubeadm.go:322] [certs] Using the existing "sa" key
	I0530 21:36:44.141563 2415330 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0530 21:36:44.593139 2415330 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0530 21:36:45.482111 2415330 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0530 21:36:45.737482 2415330 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0530 21:36:45.891797 2415330 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0530 21:36:45.911837 2415330 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0530 21:36:45.911960 2415330 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0530 21:36:45.912007 2415330 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0530 21:36:46.033686 2415330 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0530 21:36:46.036807 2415330 out.go:204]   - Booting up control plane ...
	I0530 21:36:46.036912 2415330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0530 21:36:46.044383 2415330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0530 21:36:46.045852 2415330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0530 21:36:46.047307 2415330 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0530 21:36:46.054526 2415330 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0530 21:37:26.055224 2415330 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0530 21:40:46.056060 2415330 kubeadm.go:322] 
	I0530 21:40:46.056120 2415330 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0530 21:40:46.056161 2415330 kubeadm.go:322] 		timed out waiting for the condition
	I0530 21:40:46.056166 2415330 kubeadm.go:322] 
	I0530 21:40:46.056199 2415330 kubeadm.go:322] 	This error is likely caused by:
	I0530 21:40:46.056231 2415330 kubeadm.go:322] 		- The kubelet is not running
	I0530 21:40:46.056330 2415330 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0530 21:40:46.056337 2415330 kubeadm.go:322] 
	I0530 21:40:46.056436 2415330 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0530 21:40:46.056467 2415330 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0530 21:40:46.056498 2415330 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0530 21:40:46.056503 2415330 kubeadm.go:322] 
	I0530 21:40:46.056601 2415330 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0530 21:40:46.056681 2415330 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0530 21:40:46.056685 2415330 kubeadm.go:322] 
	I0530 21:40:46.056791 2415330 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0530 21:40:46.056883 2415330 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0530 21:40:46.056955 2415330 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0530 21:40:46.057030 2415330 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0530 21:40:46.057035 2415330 kubeadm.go:322] 
	I0530 21:40:46.059589 2415330 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
	I0530 21:40:46.059712 2415330 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0530 21:40:46.059798 2415330 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0530 21:40:46.059868 2415330 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0530 21:40:46.059923 2415330 kubeadm.go:406] StartCluster complete in 12m26.529872856s
	I0530 21:40:46.060022 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0530 21:40:46.060084 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0530 21:40:46.098801 2415330 cri.go:88] found id: "b815cdadce9f32c4c95175b174f9c5f462bf5e3803218568f858bda9d19b0100"
	I0530 21:40:46.098829 2415330 cri.go:88] found id: ""
	I0530 21:40:46.098836 2415330 logs.go:284] 1 containers: [b815cdadce9f32c4c95175b174f9c5f462bf5e3803218568f858bda9d19b0100]
	I0530 21:40:46.098906 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:40:46.109599 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0530 21:40:46.109673 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0530 21:40:46.146404 2415330 cri.go:88] found id: "1d7a7b0d022d62ca18b3c87f6e909049a7ac38f12ec399e804bbcd4292a4f814"
	I0530 21:40:46.146429 2415330 cri.go:88] found id: ""
	I0530 21:40:46.146444 2415330 logs.go:284] 1 containers: [1d7a7b0d022d62ca18b3c87f6e909049a7ac38f12ec399e804bbcd4292a4f814]
	I0530 21:40:46.146508 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:40:46.151335 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0530 21:40:46.151409 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0530 21:40:46.183576 2415330 cri.go:88] found id: ""
	I0530 21:40:46.183603 2415330 logs.go:284] 0 containers: []
	W0530 21:40:46.183611 2415330 logs.go:286] No container was found matching "coredns"
	I0530 21:40:46.183617 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0530 21:40:46.183686 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0530 21:40:46.214023 2415330 cri.go:88] found id: "6cdac915b19cfe753d25a2ab3dc36aeff687c0206d3db7a1556f6fc8a1c32911"
	I0530 21:40:46.214085 2415330 cri.go:88] found id: ""
	I0530 21:40:46.214099 2415330 logs.go:284] 1 containers: [6cdac915b19cfe753d25a2ab3dc36aeff687c0206d3db7a1556f6fc8a1c32911]
	I0530 21:40:46.214157 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:40:46.218659 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0530 21:40:46.218730 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0530 21:40:46.253904 2415330 cri.go:88] found id: ""
	I0530 21:40:46.253925 2415330 logs.go:284] 0 containers: []
	W0530 21:40:46.253933 2415330 logs.go:286] No container was found matching "kube-proxy"
	I0530 21:40:46.253939 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0530 21:40:46.254002 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0530 21:40:46.284727 2415330 cri.go:88] found id: "05b89d815bb9d8f5a94d1c72f2f5fc5fe8933718c3d1706353b5dd2f954d83c9"
	I0530 21:40:46.284746 2415330 cri.go:88] found id: ""
	I0530 21:40:46.284754 2415330 logs.go:284] 1 containers: [05b89d815bb9d8f5a94d1c72f2f5fc5fe8933718c3d1706353b5dd2f954d83c9]
	I0530 21:40:46.284809 2415330 ssh_runner.go:195] Run: which crictl
	I0530 21:40:46.289695 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0530 21:40:46.289772 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0530 21:40:46.338613 2415330 cri.go:88] found id: ""
	I0530 21:40:46.338636 2415330 logs.go:284] 0 containers: []
	W0530 21:40:46.338644 2415330 logs.go:286] No container was found matching "kindnet"
	I0530 21:40:46.338650 2415330 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0530 21:40:46.338711 2415330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0530 21:40:46.382702 2415330 cri.go:88] found id: ""
	I0530 21:40:46.382722 2415330 logs.go:284] 0 containers: []
	W0530 21:40:46.382730 2415330 logs.go:286] No container was found matching "storage-provisioner"
	I0530 21:40:46.382744 2415330 logs.go:123] Gathering logs for container status ...
	I0530 21:40:46.382756 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0530 21:40:46.425531 2415330 logs.go:123] Gathering logs for kubelet ...
	I0530 21:40:46.425609 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0530 21:40:46.517046 2415330 logs.go:123] Gathering logs for kube-apiserver [b815cdadce9f32c4c95175b174f9c5f462bf5e3803218568f858bda9d19b0100] ...
	I0530 21:40:46.517124 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b815cdadce9f32c4c95175b174f9c5f462bf5e3803218568f858bda9d19b0100"
	I0530 21:40:46.569437 2415330 logs.go:123] Gathering logs for etcd [1d7a7b0d022d62ca18b3c87f6e909049a7ac38f12ec399e804bbcd4292a4f814] ...
	I0530 21:40:46.569518 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7a7b0d022d62ca18b3c87f6e909049a7ac38f12ec399e804bbcd4292a4f814"
	I0530 21:40:46.616048 2415330 logs.go:123] Gathering logs for kube-controller-manager [05b89d815bb9d8f5a94d1c72f2f5fc5fe8933718c3d1706353b5dd2f954d83c9] ...
	I0530 21:40:46.616118 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b89d815bb9d8f5a94d1c72f2f5fc5fe8933718c3d1706353b5dd2f954d83c9"
	I0530 21:40:46.677548 2415330 logs.go:123] Gathering logs for containerd ...
	I0530 21:40:46.677621 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0530 21:40:46.763596 2415330 logs.go:123] Gathering logs for dmesg ...
	I0530 21:40:46.763752 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0530 21:40:46.786392 2415330 logs.go:123] Gathering logs for describe nodes ...
	I0530 21:40:46.786555 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0530 21:40:46.887789 2415330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0530 21:40:46.887809 2415330 logs.go:123] Gathering logs for kube-scheduler [6cdac915b19cfe753d25a2ab3dc36aeff687c0206d3db7a1556f6fc8a1c32911] ...
	I0530 21:40:46.887823 2415330 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cdac915b19cfe753d25a2ab3dc36aeff687c0206d3db7a1556f6fc8a1c32911"
	W0530 21:40:46.993590 2415330 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1036-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0530 21:40:46.993639 2415330 out.go:239] * 
	W0530 21:40:46.993851 2415330 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1036-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0530 21:40:46.993883 2415330 out.go:239] * 
	W0530 21:40:46.995512 2415330 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 21:40:47.000087 2415330 out.go:177] 
	W0530 21:40:47.005455 2415330 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1036-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0530 21:40:47.005952 2415330 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0530 21:40:47.007054 2415330 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0530 21:40:47.009429 2415330 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.22.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-708012 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (886.44s)
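For local triage, the failure above can be reproduced and inspected with the commands the log itself suggests. A minimal sketch (the profile name, memory, and flags are taken verbatim from the failing invocation; the kubelet cgroup-driver override comes from the Suggestion line in the log):

	# Re-run the failing upgrade start with the suggested kubelet cgroup driver
	out/minikube-linux-arm64 start -p stopped-upgrade-708012 --memory=2200 \
		--alsologtostderr -v=1 --driver=docker --container-runtime=containerd \
		--extra-config=kubelet.cgroup-driver=systemd

	# Inspect kubelet and container state inside the node, per the kubeadm hints
	out/minikube-linux-arm64 -p stopped-upgrade-708012 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-arm64 -p stopped-upgrade-708012 ssh -- \
		sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a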

                                                
                                    

Test pass (264/302)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.96
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.27.2/json-events 9.75
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.6
22 TestAddons/Setup 140.09
26 TestAddons/parallel/InspektorGadget 10.62
27 TestAddons/parallel/MetricsServer 5.62
30 TestAddons/parallel/CSI 57.86
31 TestAddons/parallel/Headlamp 24.59
32 TestAddons/parallel/CloudSpanner 5.61
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.32
37 TestCertOptions 35.53
38 TestCertExpiration 233.63
40 TestForceSystemdFlag 38.26
41 TestForceSystemdEnv 35.98
46 TestErrorSpam/setup 30
47 TestErrorSpam/start 0.81
48 TestErrorSpam/status 1.13
49 TestErrorSpam/pause 1.85
50 TestErrorSpam/unpause 2.16
51 TestErrorSpam/stop 1.47
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 64.38
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 17.56
58 TestFunctional/serial/KubeContext 0.07
59 TestFunctional/serial/KubectlGetPods 0.11
62 TestFunctional/serial/CacheCmd/cache/add_remote 4.48
63 TestFunctional/serial/CacheCmd/cache/add_local 1.48
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
65 TestFunctional/serial/CacheCmd/cache/list 0.06
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
67 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
68 TestFunctional/serial/CacheCmd/cache/delete 0.12
69 TestFunctional/serial/MinikubeKubectlCmd 0.17
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
71 TestFunctional/serial/ExtraConfig 57
72 TestFunctional/serial/ComponentHealth 0.11
73 TestFunctional/serial/LogsCmd 1.94
74 TestFunctional/serial/LogsFileCmd 1.85
76 TestFunctional/parallel/ConfigCmd 0.47
77 TestFunctional/parallel/DashboardCmd 8.73
78 TestFunctional/parallel/DryRun 0.48
79 TestFunctional/parallel/InternationalLanguage 0.21
80 TestFunctional/parallel/StatusCmd 1.33
84 TestFunctional/parallel/ServiceCmdConnect 6.72
85 TestFunctional/parallel/AddonsCmd 0.17
86 TestFunctional/parallel/PersistentVolumeClaim 39.83
88 TestFunctional/parallel/SSHCmd 0.82
89 TestFunctional/parallel/CpCmd 1.44
91 TestFunctional/parallel/FileSync 0.43
92 TestFunctional/parallel/CertSync 2.17
96 TestFunctional/parallel/NodeLabels 0.09
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
100 TestFunctional/parallel/License 0.36
101 TestFunctional/parallel/Version/short 0.06
102 TestFunctional/parallel/Version/components 1.57
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
105 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
106 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
107 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
108 TestFunctional/parallel/ImageCommands/Setup 1.96
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
113 TestFunctional/parallel/ServiceCmd/DeployApp 11.39
116 TestFunctional/parallel/ServiceCmd/List 0.53
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
120 TestFunctional/parallel/ServiceCmd/Format 0.56
121 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
122 TestFunctional/parallel/ServiceCmd/URL 0.51
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.71
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
137 TestFunctional/parallel/ProfileCmd/profile_list 0.43
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
139 TestFunctional/parallel/MountCmd/any-port 6.97
140 TestFunctional/parallel/MountCmd/specific-port 2.24
141 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
142 TestFunctional/delete_addon-resizer_images 0.09
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 102.22
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.81
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.45
155 TestJSONOutput/start/Command 65.19
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.84
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.77
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 5.88
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.24
180 TestKicCustomNetwork/create_custom_network 40.77
181 TestKicCustomNetwork/use_default_bridge_network 38.32
182 TestKicExistingNetwork 36.61
183 TestKicCustomSubnet 35.41
184 TestKicStaticIP 34.1
185 TestMainNoArgs 0.05
186 TestMinikubeProfile 71.47
189 TestMountStart/serial/StartWithMountFirst 6.47
190 TestMountStart/serial/VerifyMountFirst 0.29
191 TestMountStart/serial/StartWithMountSecond 6.78
192 TestMountStart/serial/VerifyMountSecond 0.28
193 TestMountStart/serial/DeleteFirst 1.73
194 TestMountStart/serial/VerifyMountPostDelete 0.28
195 TestMountStart/serial/Stop 1.24
196 TestMountStart/serial/RestartStopped 7.41
197 TestMountStart/serial/VerifyMountPostStop 0.28
200 TestMultiNode/serial/FreshStart2Nodes 89.45
201 TestMultiNode/serial/DeployApp2Nodes 5.13
202 TestMultiNode/serial/PingHostFrom2Pods 1.15
203 TestMultiNode/serial/AddNode 21.54
204 TestMultiNode/serial/ProfileList 0.35
205 TestMultiNode/serial/CopyFile 10.8
206 TestMultiNode/serial/StopNode 2.37
207 TestMultiNode/serial/StartAfterStop 20.59
208 TestMultiNode/serial/RestartKeepsNodes 139.96
209 TestMultiNode/serial/DeleteNode 5.21
210 TestMultiNode/serial/StopMultiNode 24.36
211 TestMultiNode/serial/RestartMultiNode 87.61
212 TestMultiNode/serial/ValidateNameConflict 35.35
217 TestPreload 179.35
219 TestScheduledStopUnix 111.87
222 TestInsufficientStorage 13
223 TestRunningBinaryUpgrade 117.2
225 TestKubernetesUpgrade 424.78
229 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
230 TestPause/serial/Start 92.4
231 TestNoKubernetes/serial/StartWithK8s 47.32
232 TestNoKubernetes/serial/StartWithStopK8s 16.57
233 TestNoKubernetes/serial/Start 5.79
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
235 TestNoKubernetes/serial/ProfileList 0.96
236 TestNoKubernetes/serial/Stop 1.24
237 TestNoKubernetes/serial/StartNoArgs 6.63
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
239 TestPause/serial/SecondStartNoReconfiguration 12.37
240 TestPause/serial/Pause 0.99
241 TestPause/serial/VerifyStatus 0.38
242 TestPause/serial/Unpause 1.01
243 TestPause/serial/PauseAgain 1.11
244 TestPause/serial/DeletePaused 2.68
245 TestPause/serial/VerifyDeletedResources 0.16
246 TestStoppedBinaryUpgrade/Setup 1.37
262 TestNetworkPlugins/group/false 3.9
267 TestStartStop/group/old-k8s-version/serial/FirstStart 127.35
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
270 TestStartStop/group/no-preload/serial/FirstStart 75.4
271 TestStartStop/group/old-k8s-version/serial/DeployApp 9.74
272 TestStartStop/group/no-preload/serial/DeployApp 9.72
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.86
275 TestStartStop/group/old-k8s-version/serial/Stop 12.53
276 TestStartStop/group/no-preload/serial/Stop 12.42
277 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
278 TestStartStop/group/old-k8s-version/serial/SecondStart 680.4
279 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
280 TestStartStop/group/no-preload/serial/SecondStart 349.83
281 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.03
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
284 TestStartStop/group/no-preload/serial/Pause 3.85
286 TestStartStop/group/embed-certs/serial/FirstStart 69.97
287 TestStartStop/group/embed-certs/serial/DeployApp 8.52
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
289 TestStartStop/group/embed-certs/serial/Stop 12.29
290 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
291 TestStartStop/group/embed-certs/serial/SecondStart 353.08
292 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
293 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
295 TestStartStop/group/old-k8s-version/serial/Pause 3.35
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.91
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.78
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.56
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 348.73
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.2
305 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
306 TestStartStop/group/embed-certs/serial/Pause 5.24
308 TestStartStop/group/newest-cni/serial/FirstStart 43.17
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
311 TestStartStop/group/newest-cni/serial/Stop 1.29
312 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/newest-cni/serial/SecondStart 44.88
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
317 TestStartStop/group/newest-cni/serial/Pause 3.53
318 TestNetworkPlugins/group/auto/Start 86.77
319 TestNetworkPlugins/group/auto/KubeletFlags 0.3
320 TestNetworkPlugins/group/auto/NetCatPod 9.43
321 TestNetworkPlugins/group/auto/DNS 0.2
322 TestNetworkPlugins/group/auto/Localhost 0.19
323 TestNetworkPlugins/group/auto/HairPin 0.19
324 TestNetworkPlugins/group/kindnet/Start 55.76
325 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
326 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
327 TestNetworkPlugins/group/kindnet/NetCatPod 10.44
328 TestNetworkPlugins/group/kindnet/DNS 0.25
329 TestNetworkPlugins/group/kindnet/Localhost 0.24
330 TestNetworkPlugins/group/kindnet/HairPin 0.2
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.03
332 TestNetworkPlugins/group/calico/Start 82.63
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.45
336 TestNetworkPlugins/group/custom-flannel/Start 73.93
337 TestNetworkPlugins/group/calico/ControllerPod 5.03
338 TestNetworkPlugins/group/calico/KubeletFlags 0.33
339 TestNetworkPlugins/group/calico/NetCatPod 9.56
340 TestNetworkPlugins/group/calico/DNS 0.29
341 TestNetworkPlugins/group/calico/Localhost 0.2
342 TestNetworkPlugins/group/calico/HairPin 0.22
343 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
344 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.42
345 TestNetworkPlugins/group/custom-flannel/DNS 0.31
346 TestNetworkPlugins/group/custom-flannel/Localhost 0.27
347 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
348 TestNetworkPlugins/group/enable-default-cni/Start 90.78
349 TestNetworkPlugins/group/flannel/Start 74.09
350 TestNetworkPlugins/group/flannel/ControllerPod 5.03
351 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
352 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.46
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
354 TestNetworkPlugins/group/flannel/NetCatPod 9.51
355 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
356 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
357 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
358 TestNetworkPlugins/group/flannel/DNS 0.24
359 TestNetworkPlugins/group/flannel/Localhost 0.18
360 TestNetworkPlugins/group/flannel/HairPin 0.2
361 TestNetworkPlugins/group/bridge/Start 83.15
362 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
363 TestNetworkPlugins/group/bridge/NetCatPod 10.37
364 TestNetworkPlugins/group/bridge/DNS 0.24
365 TestNetworkPlugins/group/bridge/Localhost 0.2
366 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (13.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-942566 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-942566 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.962494066s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.96s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-942566
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-942566: exit status 85 (71.263911ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-942566 | jenkins | v1.30.1 | 30 May 23 20:50 UTC |          |
	|         | -p download-only-942566        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 20:50:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 20:50:49.000150 2294298 out.go:296] Setting OutFile to fd 1 ...
	I0530 20:50:49.000634 2294298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:50:49.000711 2294298 out.go:309] Setting ErrFile to fd 2...
	I0530 20:50:49.000721 2294298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:50:49.000979 2294298 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	W0530 20:50:49.001169 2294298 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16597-2288886/.minikube/config/config.json: open /home/jenkins/minikube-integration/16597-2288886/.minikube/config/config.json: no such file or directory
	I0530 20:50:49.001703 2294298 out.go:303] Setting JSON to true
	I0530 20:50:49.002797 2294298 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":174748,"bootTime":1685305101,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 20:50:49.002927 2294298 start.go:135] virtualization:  
	I0530 20:50:49.006583 2294298 out.go:97] [download-only-942566] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 20:50:49.008642 2294298 out.go:169] MINIKUBE_LOCATION=16597
	W0530 20:50:49.006797 2294298 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball: no such file or directory
	I0530 20:50:49.006878 2294298 notify.go:220] Checking for updates...
	I0530 20:50:49.010943 2294298 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 20:50:49.013060 2294298 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 20:50:49.015008 2294298 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 20:50:49.017078 2294298 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0530 20:50:49.020841 2294298 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0530 20:50:49.021163 2294298 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 20:50:49.046851 2294298 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 20:50:49.046929 2294298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:50:49.114904 2294298 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-05-30 20:50:49.105047143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:50:49.115036 2294298 docker.go:294] overlay module found
	I0530 20:50:49.117122 2294298 out.go:97] Using the docker driver based on user configuration
	I0530 20:50:49.117146 2294298 start.go:295] selected driver: docker
	I0530 20:50:49.117158 2294298 start.go:870] validating driver "docker" against <nil>
	I0530 20:50:49.117265 2294298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:50:49.178102 2294298 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-05-30 20:50:49.168421111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:50:49.178250 2294298 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 20:50:49.178574 2294298 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0530 20:50:49.178732 2294298 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0530 20:50:49.180732 2294298 out.go:169] Using Docker driver with root privileges
	I0530 20:50:49.182373 2294298 cni.go:84] Creating CNI manager for ""
	I0530 20:50:49.182387 2294298 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:50:49.182397 2294298 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0530 20:50:49.182414 2294298 start_flags.go:319] config:
	{Name:download-only-942566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-942566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:50:49.184327 2294298 out.go:97] Starting control plane node download-only-942566 in cluster download-only-942566
	I0530 20:50:49.184369 2294298 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0530 20:50:49.186131 2294298 out.go:97] Pulling base image ...
	I0530 20:50:49.186156 2294298 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0530 20:50:49.186331 2294298 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0530 20:50:49.203302 2294298 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0530 20:50:49.204058 2294298 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0530 20:50:49.204171 2294298 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0530 20:50:49.263054 2294298 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0530 20:50:49.263079 2294298 cache.go:57] Caching tarball of preloaded images
	I0530 20:50:49.263812 2294298 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0530 20:50:49.266479 2294298 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0530 20:50:49.266503 2294298 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0530 20:50:49.392902 2294298 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0530 20:50:54.476709 2294298 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-942566"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
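
Why a failing "logs" call still passes: a --download-only start only caches the preload tarball and kic base image, it never creates a node, so "minikube logs" has nothing to collect (hence "The control plane node \"\" does not exist." above) and exits non-zero; the test records the failure and passes anyway. A minimal reproduction sketch, assuming the same arm64 build and profile name used in this run:

	# cache artifacts without creating a node, then ask for logs
	out/minikube-linux-arm64 start --download-only -p download-only-942566 \
	  --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker
	out/minikube-linux-arm64 logs -p download-only-942566
	echo "exit=$?"   # observed above: exit status 85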

TestDownloadOnly/v1.27.2/json-events (9.75s)

=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-942566 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-942566 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.747748145s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (9.75s)

TestDownloadOnly/v1.27.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-942566
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-942566: exit status 85 (74.388242ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-942566 | jenkins | v1.30.1 | 30 May 23 20:50 UTC |          |
	|         | -p download-only-942566        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-942566 | jenkins | v1.30.1 | 30 May 23 20:51 UTC |          |
	|         | -p download-only-942566        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 20:51:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 20:51:03.037251 2294379 out.go:296] Setting OutFile to fd 1 ...
	I0530 20:51:03.037429 2294379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:51:03.037438 2294379 out.go:309] Setting ErrFile to fd 2...
	I0530 20:51:03.037444 2294379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 20:51:03.037606 2294379 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	W0530 20:51:03.037724 2294379 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16597-2288886/.minikube/config/config.json: open /home/jenkins/minikube-integration/16597-2288886/.minikube/config/config.json: no such file or directory
	I0530 20:51:03.037973 2294379 out.go:303] Setting JSON to true
	I0530 20:51:03.039024 2294379 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":174762,"bootTime":1685305101,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 20:51:03.039105 2294379 start.go:135] virtualization:  
	I0530 20:51:03.041591 2294379 out.go:97] [download-only-942566] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 20:51:03.043861 2294379 out.go:169] MINIKUBE_LOCATION=16597
	I0530 20:51:03.041886 2294379 notify.go:220] Checking for updates...
	I0530 20:51:03.047872 2294379 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 20:51:03.049651 2294379 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 20:51:03.051558 2294379 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 20:51:03.053692 2294379 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0530 20:51:03.057355 2294379 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0530 20:51:03.057894 2294379 config.go:182] Loaded profile config "download-only-942566": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0530 20:51:03.057960 2294379 start.go:778] api.Load failed for download-only-942566: filestore "download-only-942566": Docker machine "download-only-942566" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0530 20:51:03.058083 2294379 driver.go:375] Setting default libvirt URI to qemu:///system
	W0530 20:51:03.058110 2294379 start.go:778] api.Load failed for download-only-942566: filestore "download-only-942566": Docker machine "download-only-942566" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0530 20:51:03.082954 2294379 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 20:51:03.083057 2294379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:51:03.169339 2294379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-30 20:51:03.158936788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:51:03.169443 2294379 docker.go:294] overlay module found
	I0530 20:51:03.171168 2294379 out.go:97] Using the docker driver based on existing profile
	I0530 20:51:03.171195 2294379 start.go:295] selected driver: docker
	I0530 20:51:03.171202 2294379 start.go:870] validating driver "docker" against &{Name:download-only-942566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-942566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:51:03.171395 2294379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 20:51:03.234853 2294379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-05-30 20:51:03.225141396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 20:51:03.235192 2294379 cni.go:84] Creating CNI manager for ""
	I0530 20:51:03.235200 2294379 cni.go:142] "docker" driver + "containerd" runtime found, recommending kindnet
	I0530 20:51:03.235209 2294379 start_flags.go:319] config:
	{Name:download-only-942566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-942566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 20:51:03.237006 2294379 out.go:97] Starting control plane node download-only-942566 in cluster download-only-942566
	I0530 20:51:03.237067 2294379 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0530 20:51:03.238650 2294379 out.go:97] Pulling base image ...
	I0530 20:51:03.238681 2294379 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:03.238840 2294379 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0530 20:51:03.261579 2294379 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0530 20:51:03.261691 2294379 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0530 20:51:03.261716 2294379 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0530 20:51:03.261726 2294379 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0530 20:51:03.261734 2294379 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0530 20:51:03.312696 2294379 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	I0530 20:51:03.312722 2294379 cache.go:57] Caching tarball of preloaded images
	I0530 20:51:03.312877 2294379 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0530 20:51:03.314774 2294379 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0530 20:51:03.314801 2294379 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4 ...
	I0530 20:51:03.425884 2294379 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:f7a0ab28c8afe2dae72c45c225aaac8f -> /home/jenkins/minikube-integration/16597-2288886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-942566"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-942566
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-812172 --alsologtostderr --binary-mirror http://127.0.0.1:42239 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-812172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-812172
--- PASS: TestBinaryMirror (0.60s)
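
For context, TestBinaryMirror checks that the kubectl, kubelet and kubeadm binaries can be fetched from an alternate URL instead of the default release location. A sketch of the idea, assuming --binary-mirror does nothing beyond redirecting those downloads (the port and profile name are taken from the run above; serving the mirror with python3 is purely illustrative):

	# serve a directory of pre-fetched binaries, then point minikube at it
	python3 -m http.server 42239 &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-812172 \
	  --binary-mirror http://127.0.0.1:42239 --driver=docker --container-runtime=containerd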

TestAddons/Setup (140.09s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-084881 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-084881 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m20.090604109s)
--- PASS: TestAddons/Setup (140.09s)

TestAddons/parallel/InspektorGadget (10.62s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-knkf4" [80305b14-d91e-4624-88c3-a900af68d332] Running
2023/05/30 20:55:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/05/30 20:55:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010912547s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-084881
2023/05/30 20:55:14 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:14 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-084881: (5.605431754s)
--- PASS: TestAddons/parallel/InspektorGadget (10.62s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 5.463002ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-l29tb" [2f3ce57c-fd89-420e-863b-e5b166ccdb49] Running
2023/05/30 20:55:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009499787s
addons_test.go:391: (dbg) Run:  kubectl --context addons-084881 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 addons disable metrics-server --alsologtostderr -v=1
2023/05/30 20:55:07 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:55:07 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:55:07 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/CSI (57.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.91483ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/05/30 20:54:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:06 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/05/30 20:54:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/05/30 20:54:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:19 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:54:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:19 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/05/30 20:54:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/05/30 20:54:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1e2fb5df-432d-48fe-9c25-4c64e41b4916] Pending
helpers_test.go:344: "task-pv-pod" [1e2fb5df-432d-48fe-9c25-4c64e41b4916] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
2023/05/30 20:54:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:344: "task-pv-pod" [1e2fb5df-432d-48fe-9c25-4c64e41b4916] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00827604s
addons_test.go:560: (dbg) Run:  kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-084881 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
2023/05/30 20:54:34 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
helpers_test.go:419: (dbg) Run:  kubectl --context addons-084881 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-084881 delete pod task-pv-pod
2023/05/30 20:54:35 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:54:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
addons_test.go:570: (dbg) Done: kubectl --context addons-084881 delete pod task-pv-pod: (1.075847627s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-084881 delete pvc hpvc
2023/05/30 20:54:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:582: (dbg) Run:  kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/05/30 20:54:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:38 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/05/30 20:54:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084881 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b6c37255-ef79-4b23-87be-91c9b53ae0da] Pending
helpers_test.go:344: "task-pv-pod-restore" [b6c37255-ef79-4b23-87be-91c9b53ae0da] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b6c37255-ef79-4b23-87be-91c9b53ae0da] Running
2023/05/30 20:54:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:51 [DEBUG] GET http://192.168.49.2:5000
2023/05/30 20:54:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/05/30 20:54:52 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:52 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.012188887s
addons_test.go:602: (dbg) Run:  kubectl --context addons-084881 delete pod task-pv-pod-restore
2023/05/30 20:54:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:606: (dbg) Run:  kubectl --context addons-084881 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-084881 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 addons disable csi-hostpath-driver --alsologtostderr -v=1
2023/05/30 20:54:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/05/30 20:54:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-084881 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.630465734s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-084881 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.86s)
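
The interleaved GET http://192.168.49.2:5000 retries above apparently come from a registry connectivity check running in a parallel test; stripped of that noise, the CSI scenario is a create, snapshot, restore round-trip. A condensed sketch using only commands that already appear in this log:

	kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pvc.yaml           # claim storage (pvc "hpvc")
	kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod that consumes it
	kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/snapshot.yaml      # volume snapshot "new-snapshot-demo"
	kubectl --context addons-084881 delete pod task-pv-pod
	kubectl --context addons-084881 delete pvc hpvc
	kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # "hpvc-restore", seeded from the snapshot
	kubectl --context addons-084881 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml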

TestAddons/parallel/Headlamp (24.59s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-084881 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-084881 --alsologtostderr -v=1: (1.575304252s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-jnx5p" [6e585f8e-298f-4cd6-99da-b691b3aede2a] Pending
helpers_test.go:344: "headlamp-6b5756787-jnx5p" [6e585f8e-298f-4cd6-99da-b691b3aede2a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-jnx5p" [6e585f8e-298f-4cd6-99da-b691b3aede2a] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.018013327s
--- PASS: TestAddons/parallel/Headlamp (24.59s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6964794569-xwwd4" [455141cf-39a9-429b-b151-db43ab8fe3dd] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011598244s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-084881
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-084881 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-084881 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
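
What this asserts, in kubectl terms: the gcp-auth addon is expected to replicate its credentials secret into namespaces created after the addon was enabled. A sketch of the same check by hand (the -o name flag is an illustrative addition, not part of the test):

	kubectl --context addons-084881 create ns new-namespace
	kubectl --context addons-084881 get secret gcp-auth -n new-namespace -o name   # expect: secret/gcp-auth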

TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-084881
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-084881: (12.086152112s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-084881
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-084881
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-084881
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

TestCertOptions (35.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-061620 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-061620 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.530881052s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-061620 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-061620 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-061620 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-061620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-061620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-061620: (2.275783254s)
--- PASS: TestCertOptions (35.53s)
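
A quick way to see what the assertions cover: the extra SANs and the non-default API server port requested on the command line should surface in the generated certificate and in the kubeconfig. A sketch reusing the commands above, with illustrative grep filters appended:

	out/minikube-linux-arm64 -p cert-options-061620 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -E '192\.168\.15\.15|www\.google\.com'              # SANs from --apiserver-ips / --apiserver-names
	kubectl --context cert-options-061620 config view | grep 8555   # port from --apiserver-port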

TestCertExpiration (233.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-446324 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0530 21:35:44.070794 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-446324 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (31.345984117s)
E0530 21:38:34.569061 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:38:38.506210 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:38:47.115687 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-446324 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-446324 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (19.947815263s)
helpers_test.go:175: Cleaning up "cert-expiration-446324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-446324
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-446324: (2.331718967s)
--- PASS: TestCertExpiration (233.63s)
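
Note: this test first provisions certificates with --cert-expiration=3m, waits out that window, then restarts the cluster with --cert-expiration=8760h (one year). A small Go sketch, under the assumption that you have a PEM copy of the node's apiserver certificate on disk, that reports the remaining validity window:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the node's cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	left := time.Until(cert.NotAfter)
	fmt.Printf("expires %s (in %s)\n", cert.NotAfter.Format(time.RFC3339), left.Round(time.Second))
	// After the first start this window is on the order of minutes;
	// after the 8760h restart it should be roughly a year.
}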

TestForceSystemdFlag (38.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-612366 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-612366 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.876653198s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-612366 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-612366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-612366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-612366: (2.082181816s)
--- PASS: TestForceSystemdFlag (38.26s)
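
Note: the "cat /etc/containerd/config.toml" step above is how the test observes the effect of --force-systemd; with containerd the relevant runc option is SystemdCgroup. A hedged Go sketch of such a check, assuming a local copy of config.toml (the test's actual matching logic may differ):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("config.toml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A plain line scan is enough for a smoke check of the cgroup driver.
	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.TrimSpace(sc.Text()) == "SystemdCgroup = true" {
			found = true
			break
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	fmt.Println("systemd cgroup driver enabled:", found)
}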

TestForceSystemdEnv (35.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-123107 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-123107 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.61525075s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-123107 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-123107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-123107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-123107: (2.062440322s)
--- PASS: TestForceSystemdEnv (35.98s)

TestErrorSpam/setup (30s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-867026 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-867026 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-867026 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-867026 --driver=docker  --container-runtime=containerd: (30.00082372s)
--- PASS: TestErrorSpam/setup (30.00s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 pause
--- PASS: TestErrorSpam/pause (1.85s)

TestErrorSpam/unpause (2.16s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 unpause
--- PASS: TestErrorSpam/unpause (2.16s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 stop: (1.250361426s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-867026 --log_dir /tmp/nospam-867026 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16597-2288886/.minikube/files/etc/test/nested/copy/2294292/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812242 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0530 20:58:34.568871 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:34.576490 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:34.586744 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:34.607040 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:34.647706 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:34.727962 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:34.888328 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:35.209030 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:35.849909 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:37.130366 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:39.691012 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:44.811620 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 20:58:55.051867 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-arm64 start -p functional-812242 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m4.380881401s)
--- PASS: TestFunctional/serial/StartWithProxy (64.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (17.56s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812242 --alsologtostderr -v=8
E0530 20:59:15.532116 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
functional_test.go:654: (dbg) Done: out/minikube-linux-arm64 start -p functional-812242 --alsologtostderr -v=8: (17.561113575s)
functional_test.go:658: soft start took 17.56158292s for "functional-812242" cluster.
--- PASS: TestFunctional/serial/SoftStart (17.56s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-812242 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 cache add registry.k8s.io/pause:3.1: (1.596383177s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 cache add registry.k8s.io/pause:3.3: (1.49617318s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 cache add registry.k8s.io/pause:latest: (1.383935542s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-812242 /tmp/TestFunctionalserialCacheCmdcacheadd_local1244670664/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cache add minikube-local-cache-test:functional-812242
functional_test.go:1084: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 cache add minikube-local-cache-test:functional-812242: (1.004544139s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cache delete minikube-local-cache-test:functional-812242
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-812242
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (322.343192ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 cache reload: (1.392249368s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
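
Note: the sequence above is the interesting part of this test: the image is removed from the node (crictl rmi), inspecti then fails with "no such image", and `cache reload` pushes the host-side cache back into the node so the final inspecti succeeds. An illustrative Go replay of that flow (binary path, profile name and image are lifted from this log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it exited zero.
func run(name string, args ...string) bool {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err == nil
}

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "functional-812242"
	const img = "registry.k8s.io/pause:latest"

	run(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
	// Expected to fail here: the image was just removed from the node.
	if !run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) {
		run(mk, "-p", profile, "cache", "reload")
	}
	// After the reload the inspect should succeed again.
	run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img)
}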

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 kubectl -- --context functional-812242 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-812242 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0530 20:59:56.493542 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-arm64 start -p functional-812242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.995712777s)
functional_test.go:756: restart took 56.995808341s for "functional-812242" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (57.00s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-812242 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 logs: (1.939025618s)
--- PASS: TestFunctional/serial/LogsCmd (1.94s)

TestFunctional/serial/LogsFileCmd (1.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 logs --file /tmp/TestFunctionalserialLogsFileCmd203551222/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 logs --file /tmp/TestFunctionalserialLogsFileCmd203551222/001/logs.txt: (1.845760201s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.85s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 config get cpus: exit status 14 (71.113472ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 config get cpus: exit status 14 (72.888842ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
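
Note: the exit status 14 seen twice above is what `config get` returns when the key is unset ("specified key could not be found in config"); set/get/unset otherwise round-trip cleanly. A small Go sketch that distinguishes the two cases (binary path and profile name are taken from this log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-812242",
		"config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cpus is set (exit status 0)")
	case errors.As(err, &ee):
		fmt.Println("exit status:", ee.ExitCode()) // 14 when the key is unset
	default:
		fmt.Println("could not run minikube:", err)
	}
}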

TestFunctional/parallel/DashboardCmd (8.73s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-812242 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-812242 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2322486: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.73s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-812242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (206.552234ms)

-- stdout --
	* [functional-812242] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0530 21:01:28.585755 2322118 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:01:28.585961 2322118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:01:28.585972 2322118 out.go:309] Setting ErrFile to fd 2...
	I0530 21:01:28.585978 2322118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:01:28.586184 2322118 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:01:28.586589 2322118 out.go:303] Setting JSON to false
	I0530 21:01:28.587765 2322118 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":175388,"bootTime":1685305101,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 21:01:28.587835 2322118 start.go:135] virtualization:  
	I0530 21:01:28.590994 2322118 out.go:177] * [functional-812242] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 21:01:28.594580 2322118 notify.go:220] Checking for updates...
	I0530 21:01:28.597607 2322118 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 21:01:28.599781 2322118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 21:01:28.602306 2322118 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:01:28.604603 2322118 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 21:01:28.606902 2322118 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 21:01:28.609164 2322118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 21:01:28.611925 2322118 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 21:01:28.612585 2322118 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 21:01:28.639343 2322118 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 21:01:28.639441 2322118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:01:28.721267 2322118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-05-30 21:01:28.710983929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:01:28.721412 2322118 docker.go:294] overlay module found
	I0530 21:01:28.723628 2322118 out.go:177] * Using the docker driver based on existing profile
	I0530 21:01:28.725494 2322118 start.go:295] selected driver: docker
	I0530 21:01:28.725515 2322118 start.go:870] validating driver "docker" against &{Name:functional-812242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-812242 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:01:28.725612 2322118 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 21:01:28.728051 2322118 out.go:177] 
	W0530 21:01:28.729999 2322118 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0530 21:01:28.732107 2322118 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812242 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
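
Note: the exit status 23 above is minikube's RSRC_INSUFFICIENT_REQ_MEMORY guard rejecting --memory 250MB against the "usable minimum of 1800MB" from the error text. An illustrative Go version of that guard (the constant mirrors the message in this log, not minikube's actual source):

package main

import "fmt"

const minUsableMB = 1800 // taken from the error message above; an assumption, not minikube's constant

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the --memory 250MB dry run
	fmt.Println(validateMemory(4000)) // accepted, matching this profile's 4000MB
}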

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-812242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (210.135843ms)

-- stdout --
	* [functional-812242] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0530 21:01:29.059702 2322224 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:01:29.059943 2322224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:01:29.059969 2322224 out.go:309] Setting ErrFile to fd 2...
	I0530 21:01:29.059989 2322224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:01:29.060254 2322224 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:01:29.060746 2322224 out.go:303] Setting JSON to false
	I0530 21:01:29.061956 2322224 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":175388,"bootTime":1685305101,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 21:01:29.062056 2322224 start.go:135] virtualization:  
	I0530 21:01:29.064888 2322224 out.go:177] * [functional-812242] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	I0530 21:01:29.068451 2322224 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 21:01:29.068639 2322224 notify.go:220] Checking for updates...
	I0530 21:01:29.074009 2322224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 21:01:29.076228 2322224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:01:29.078436 2322224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 21:01:29.080536 2322224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 21:01:29.083188 2322224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 21:01:29.086186 2322224 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 21:01:29.086666 2322224 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 21:01:29.111575 2322224 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 21:01:29.111685 2322224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:01:29.199513 2322224 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-05-30 21:01:29.189629395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:01:29.199621 2322224 docker.go:294] overlay module found
	I0530 21:01:29.203747 2322224 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0530 21:01:29.205864 2322224 start.go:295] selected driver: docker
	I0530 21:01:29.205885 2322224 start.go:870] validating driver "docker" against &{Name:functional-812242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-812242 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0530 21:01:29.205992 2322224 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 21:01:29.208684 2322224 out.go:177] 
	W0530 21:01:29.210902 2322224 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0530 21:01:29.213185 2322224 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
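
Note: the French output above ("Utilisation du pilote docker...", "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY...") is the same dry-run failure as in DryRun, localized by the process locale. A rough Go sketch of reproducing it (which locale variables minikube consults is an assumption here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-812242",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect the French RSRC_INSUFFICIENT_REQ_MEMORY message
}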

TestFunctional/parallel/StatusCmd (1.33s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)

TestFunctional/parallel/ServiceCmdConnect (6.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-812242 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-812242 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-fkrcz" [1b7f6a1a-ed97-4189-b993-164ec0459b82] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-fkrcz" [1b7f6a1a-ed97-4189-b993-164ec0459b82] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.01526372s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:31526
functional_test.go:1673: http://192.168.49.2:31526: success! body:

Hostname: hello-node-connect-58d66798bb-fkrcz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31526
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.72s)
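
Note: the shape of this test is: create a deployment, expose it as a NodePort service, resolve the URL with `minikube service ... --url`, then poll it until the echo server answers. A hedged Go sketch of the polling half (the URL is the one printed in this run and is specific to it):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:31526"
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("success! body:\n%s\n", body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("service never became reachable before the deadline")
}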

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (39.83s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9725b36a-48c7-43f4-8245-478851cae8d7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009062634s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-812242 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-812242 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-812242 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-812242 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b4f0ad22-3932-42f0-95cb-a908574f1c59] Pending
helpers_test.go:344: "sp-pod" [b4f0ad22-3932-42f0-95cb-a908574f1c59] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b4f0ad22-3932-42f0-95cb-a908574f1c59] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.012569393s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-812242 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-812242 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-812242 delete -f testdata/storage-provisioner/pod.yaml: (1.279549978s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-812242 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [77bc530f-c3d3-4adf-87de-89eeffa2a041] Pending
helpers_test.go:344: "sp-pod" [77bc530f-c3d3-4adf-87de-89eeffa2a041] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [77bc530f-c3d3-4adf-87de-89eeffa2a041] Running
2023/05/30 21:01:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.016420815s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-812242 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.83s)
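
Note: the persistence check above is the core of this test: write /tmp/mount/foo inside the PVC-backed pod, delete and recreate the pod, and confirm the file is still there. An illustrative Go replay (context name and manifest paths come from this log; the real test also waits for the new pod to reach Running):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-812242 context and echoes its output.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-812242"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the recreated pod to be Running before the final check)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo" to have survived
}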

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (1.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh -n functional-812242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 cp functional-812242:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2612540320/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh -n functional-812242 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/2294292/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /etc/test/nested/copy/2294292/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.17s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/2294292.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /etc/ssl/certs/2294292.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/2294292.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /usr/share/ca-certificates/2294292.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/22942922.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /etc/ssl/certs/22942922.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/22942922.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /usr/share/ca-certificates/22942922.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
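
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention for CA certificate directories. Given a certificate, the expected link name can be derived by hand (a sketch, assuming openssl is available in the guest):

    # prints the subject hash, e.g. 51391683; the test expects <hash>.0 under /etc/ssl/certs
    out/minikube-linux-arm64 -p functional-812242 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/2294292.pem"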

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-812242 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh "sudo systemctl is-active docker": exit status 1 (394.846253ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh "sudo systemctl is-active crio": exit status 1 (380.189309ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
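
The exit status 3 above is expected rather than an error: systemctl is-active prints the unit state on stdout and exits non-zero for any state other than active, which is exactly what the test asserts for the two disabled runtimes. A manual spot check (sketch):

    out/minikube-linux-arm64 -p functional-812242 ssh "sudo systemctl is-active containerd"   # active, exit 0
    out/minikube-linux-arm64 -p functional-812242 ssh "sudo systemctl is-active docker"       # inactive, exit 3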

TestFunctional/parallel/License (0.36s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.57s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 version -o=json --components: (1.571493742s)
--- PASS: TestFunctional/parallel/Version/components (1.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812242 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-812242
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812242 image ls --format short --alsologtostderr:
I0530 21:01:38.992916 2322915 out.go:296] Setting OutFile to fd 1 ...
I0530 21:01:38.993087 2322915 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:38.993091 2322915 out.go:309] Setting ErrFile to fd 2...
I0530 21:01:38.993097 2322915 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:38.993326 2322915 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
I0530 21:01:38.994723 2322915 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:38.995019 2322915 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:38.995997 2322915 cli_runner.go:164] Run: docker container inspect functional-812242 --format={{.State.Status}}
I0530 21:01:39.019950 2322915 ssh_runner.go:195] Run: systemctl --version
I0530 21:01:39.020006 2322915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812242
I0530 21:01:39.040714 2322915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40956 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/functional-812242/id_rsa Username:docker}
I0530 21:01:39.132579 2322915 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
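
As the stderr trace shows, the listing is ultimately read via crictl inside the node. The same data can be pulled directly (a sketch, assuming jq is available on the host and that crictl emits its usual {"images": [...]} JSON shape):

    out/minikube-linux-arm64 -p functional-812242 ssh "sudo crictl images --output json" \
      | jq -r '.images[].repoTags[]'   # should match the short listing above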

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812242 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b18bf7 | 25.3MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.27.2            | sha256:2ee705 | 28.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-812242  | sha256:87d567 | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:5ee47d | 16.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-apiserver              | v1.27.2            | sha256:72c9df | 30.4MB |
| registry.k8s.io/kube-proxy                  | v1.27.2            | sha256:29921a | 21.4MB |
| docker.io/library/nginx                     | latest             | sha256:c42efe | 55.8MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:24bc64 | 80.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-scheduler              | v1.27.2            | sha256:305d7e | 16.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812242 image ls --format table --alsologtostderr:
I0530 21:01:39.992643 2323099 out.go:296] Setting OutFile to fd 1 ...
I0530 21:01:39.992918 2323099 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.992940 2323099 out.go:309] Setting ErrFile to fd 2...
I0530 21:01:39.992959 2323099 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.993148 2323099 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
I0530 21:01:39.994184 2323099 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.994469 2323099 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.994971 2323099 cli_runner.go:164] Run: docker container inspect functional-812242 --format={{.State.Status}}
I0530 21:01:40.026020 2323099 ssh_runner.go:195] Run: systemctl --version
I0530 21:01:40.026078 2323099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812242
I0530 21:01:40.046155 2323099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40956 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/functional-812242/id_rsa Username:docker}
I0530 21:01:40.149237 2323099 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812242 image ls --format json --alsologtostderr:
[{"id":"sha256:2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"28213131"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"80665728"},{"id":"sha256:5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad","repoDigests":["docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16367707"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"21369669"},{"id":"sha256:305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"16545689"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:87d567e36d72622e0ff52c6d18901453394e1e24a07db92c6fba4cca696cb863","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-812242"],"size":"1006"},{"id":"sha256:c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c","repoDigests":["docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305"],"repoTags":["docker.io/library/nginx:latest"],"size":"55764037"},{"id":"sha256:72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":["registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"30386736"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"25334607"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812242 image ls --format json --alsologtostderr:
I0530 21:01:39.243012 2322942 out.go:296] Setting OutFile to fd 1 ...
I0530 21:01:39.243598 2322942 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.243672 2322942 out.go:309] Setting ErrFile to fd 2...
I0530 21:01:39.243694 2322942 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.243979 2322942 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
I0530 21:01:39.245115 2322942 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.245379 2322942 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.246848 2322942 cli_runner.go:164] Run: docker container inspect functional-812242 --format={{.State.Status}}
I0530 21:01:39.270873 2322942 ssh_runner.go:195] Run: systemctl --version
I0530 21:01:39.270944 2322942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812242
I0530 21:01:39.290805 2322942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40956 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/functional-812242/id_rsa Username:docker}
I0530 21:01:39.388964 2322942 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
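
The JSON format is the easiest to post-process; for example, the short listing can be reproduced from the array of objects shown above (a sketch, assuming jq on the host):

    out/minikube-linux-arm64 -p functional-812242 image ls --format json \
      | jq -r '.[].repoTags[]'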

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812242 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad
repoDigests:
- docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90
repoTags:
- docker.io/library/nginx:alpine
size: "16367707"
- id: sha256:c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c
repoDigests:
- docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
repoTags:
- docker.io/library/nginx:latest
size: "55764037"
- id: sha256:305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "16545689"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "25334607"
- id: sha256:87d567e36d72622e0ff52c6d18901453394e1e24a07db92c6fba4cca696cb863
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-812242
size: "1006"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "21369669"
- id: sha256:72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "30386736"
- id: sha256:2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "28213131"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "80665728"

functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812242 image ls --format yaml --alsologtostderr:
I0530 21:01:39.682685 2323028 out.go:296] Setting OutFile to fd 1 ...
I0530 21:01:39.682953 2323028 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.682982 2323028 out.go:309] Setting ErrFile to fd 2...
I0530 21:01:39.683003 2323028 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.683589 2323028 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
I0530 21:01:39.684284 2323028 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.684470 2323028 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.684975 2323028 cli_runner.go:164] Run: docker container inspect functional-812242 --format={{.State.Status}}
I0530 21:01:39.714560 2323028 ssh_runner.go:195] Run: systemctl --version
I0530 21:01:39.714640 2323028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812242
I0530 21:01:39.743278 2323028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40956 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/functional-812242/id_rsa Username:docker}
I0530 21:01:39.845003 2323028 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh pgrep buildkitd: exit status 1 (383.806096ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image build -t localhost/my-image:functional-812242 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p functional-812242 image build -t localhost/my-image:functional-812242 testdata/build --alsologtostderr: (3.027633701s)
functional_test.go:321: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812242 image build -t localhost/my-image:functional-812242 testdata/build --alsologtostderr:
I0530 21:01:39.901964 2323080 out.go:296] Setting OutFile to fd 1 ...
I0530 21:01:39.903660 2323080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.903715 2323080 out.go:309] Setting ErrFile to fd 2...
I0530 21:01:39.903736 2323080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 21:01:39.904948 2323080 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
I0530 21:01:39.905827 2323080 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.906531 2323080 config.go:182] Loaded profile config "functional-812242": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0530 21:01:39.907253 2323080 cli_runner.go:164] Run: docker container inspect functional-812242 --format={{.State.Status}}
I0530 21:01:39.940089 2323080 ssh_runner.go:195] Run: systemctl --version
I0530 21:01:39.940141 2323080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812242
I0530 21:01:39.968815 2323080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40956 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/functional-812242/id_rsa Username:docker}
I0530 21:01:40.064096 2323080 build_images.go:151] Building image from path: /tmp/build.3393141088.tar
I0530 21:01:40.064171 2323080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0530 21:01:40.083885 2323080 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3393141088.tar
I0530 21:01:40.092802 2323080 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3393141088.tar: stat -c "%s %y" /var/lib/minikube/build/build.3393141088.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3393141088.tar': No such file or directory
I0530 21:01:40.092871 2323080 ssh_runner.go:362] scp /tmp/build.3393141088.tar --> /var/lib/minikube/build/build.3393141088.tar (3072 bytes)
I0530 21:01:40.129587 2323080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3393141088
I0530 21:01:40.142513 2323080 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3393141088 -xf /var/lib/minikube/build/build.3393141088.tar
I0530 21:01:40.155999 2323080 containerd.go:378] Building image: /var/lib/minikube/build/build.3393141088
I0530 21:01:40.156070 2323080 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3393141088 --local dockerfile=/var/lib/minikube/build/build.3393141088 --output type=image,name=localhost/my-image:functional-812242
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.8s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1d8bd4294503f54766975761ac514ac8f4ce00a9cb509b1b249a6dd9540c8863 0.0s done
#8 exporting config sha256:88a54568e34e0cc2cb4c889c7a06b527393dc69978c6db2f4c17c866cd09e425 0.0s done
#8 naming to localhost/my-image:functional-812242 done
#8 DONE 0.1s
I0530 21:01:42.822800 2323080 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3393141088 --local dockerfile=/var/lib/minikube/build/build.3393141088 --output type=image,name=localhost/my-image:functional-812242: (2.666699783s)
I0530 21:01:42.822867 2323080 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3393141088
I0530 21:01:42.837462 2323080 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3393141088.tar
I0530 21:01:42.849759 2323080 build_images.go:207] Built localhost/my-image:functional-812242 from /tmp/build.3393141088.tar
I0530 21:01:42.849797 2323080 build_images.go:123] succeeded building to: functional-812242
I0530 21:01:42.849803 2323080 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
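
The BuildKit steps above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a three-line Dockerfile; a hand-run equivalent of the whole check would look roughly like this (a sketch reconstructed from the build log, not the verbatim testdata; the content.txt payload is made up):

    mkdir -p /tmp/build && cd /tmp/build
    printf 'hello' > content.txt
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-arm64 -p functional-812242 image build -t localhost/my-image:functional-812242 .
    out/minikube-linux-arm64 -p functional-812242 image ls   # localhost/my-image should now appear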

TestFunctional/parallel/ImageCommands/Setup (1.96s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.936322036s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-812242
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-812242 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-812242 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-xcj44" [12c1b429-5e19-408f-af95-cb5bbe9e76d6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-xcj44" [12c1b429-5e19-408f-af95-cb5bbe9e76d6] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.059084642s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.39s)

TestFunctional/parallel/ServiceCmd/List (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 service list -o json
functional_test.go:1492: Took "451.793995ms" to run "out/minikube-linux-arm64 -p functional-812242 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:30194
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image rm gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:30194
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-812242
functional_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 image save --daemon gcr.io/google-containers/addon-resizer:functional-812242 --alsologtostderr
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-812242
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2319609: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-812242 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3fc6f464-780c-4f61-94f1-b31205eb0689] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3fc6f464-780c-4f61-94f1-b31205eb0689] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.013879412s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-812242 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.145.206 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
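
The serial tunnel flow above can be replayed by hand; a minimal sketch (the LoadBalancer IP differs per run):

    out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr &   # keep running in background
    kubectl --context functional-812242 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-812242 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -sI "http://$IP" | head -n1   # expect an HTTP response once the tunnel is up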

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-812242 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1313: Took "361.485542ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1327: Took "63.447625ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1364: Took "364.193573ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1377: Took "62.937648ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
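
Both timing assertions above exercise the same underlying listing; the JSON forms can be inspected directly (sketch, assuming jq on the host; the exact schema is whatever the built minikube emits):

    out/minikube-linux-arm64 profile list -o json | jq .
    out/minikube-linux-arm64 profile list -o json --light | jq .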

TestFunctional/parallel/MountCmd/any-port (6.97s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdany-port3935298908/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1685480477227252117" to /tmp/TestFunctionalparallelMountCmdany-port3935298908/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1685480477227252117" to /tmp/TestFunctionalparallelMountCmdany-port3935298908/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1685480477227252117" to /tmp/TestFunctionalparallelMountCmdany-port3935298908/001/test-1685480477227252117
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.481142ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh -- ls -la /mount-9p
E0530 21:01:18.413735 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 30 21:01 created-by-test
-rw-r--r-- 1 docker docker 24 May 30 21:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 30 21:01 test-1685480477227252117
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh cat /mount-9p/test-1685480477227252117
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-812242 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b7c63401-4016-413d-9850-9277fb64488c] Pending
helpers_test.go:344: "busybox-mount" [b7c63401-4016-413d-9850-9277fb64488c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b7c63401-4016-413d-9850-9277fb64488c] Running
helpers_test.go:344: "busybox-mount" [b7c63401-4016-413d-9850-9277fb64488c] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b7c63401-4016-413d-9850-9277fb64488c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.009968808s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-812242 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdany-port3935298908/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.97s)
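
Note that the first findmnt probe above failed and was retried: the 9p mount comes up asynchronously after the mount daemon starts, so the test polls until it appears. A manual version of the probe (sketch; /tmp/somedir is a hypothetical host directory):

    out/minikube-linux-arm64 mount -p functional-812242 /tmp/somedir:/mount-9p &
    out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p"   # retry until it succeeds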

TestFunctional/parallel/MountCmd/specific-port (2.24s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdspecific-port1567102375/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (456.994127ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdspecific-port1567102375/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh "sudo umount -f /mount-9p": exit status 1 (298.484339ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-812242 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdspecific-port1567102375/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)
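The only difference from the any-port case is pinning the 9p server to a fixed port with --port. A sketch under the same assumptions:

	out/minikube-linux-arm64 mount -p functional-812242 /tmp/hostdir:/mount-9p --port 46464 --alsologtostderr -v=1
	out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T /mount-9p | grep 9p"
	# once the mount process has stopped, a forced unmount fails as seen above:
	# the remote umount exits 32 ("not mounted"), surfaced by minikube ssh as exit status 1
	out/minikube-linux-arm64 -p functional-812242 ssh "sudo umount -f /mount-9p"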

TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185556395/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185556395/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185556395/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T" /mount1: exit status 1 (757.479983ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812242 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-812242 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185556395/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185556395/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3185556395/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)
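The cleanup path exercised above is the --kill flag, which terminates any mount processes still attached to the profile; a minimal sketch:

	# tear down all lingering 9p mount processes for the profile
	out/minikube-linux-arm64 mount -p functional-812242 --kill=true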

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-812242
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-812242
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-812242
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (102.22s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-208395 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-208395 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m42.224022069s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (102.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.81s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons enable ingress --alsologtostderr -v=5
E0530 21:03:34.569228 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons enable ingress --alsologtostderr -v=5: (9.813211853s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.81s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-208395 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.45s)

TestJSONOutput/start/Command (65.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-212083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0530 21:05:44.070930 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:44.076291 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:44.086603 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:44.106984 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:44.147264 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:44.227543 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:44.387934 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-212083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m5.185479967s)
--- PASS: TestJSONOutput/start/Command (65.19s)
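With --output=json, every line minikube prints is a self-contained CloudEvents-style JSON object (the TestErrorJSONOutput stdout below shows the schema). A sketch of following the step events, assuming jq is available:

	out/minikube-linux-arm64 start -p json-output-212083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data | "\(.currentstep)/\(.totalsteps) \(.message)"'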

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-212083 --output=json --user=testUser
E0530 21:05:44.708503 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.77s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-212083 --output=json --user=testUser
E0530 21:05:45.348726 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.77s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-212083 --output=json --user=testUser
E0530 21:05:46.628951 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:05:49.189118 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-212083 --output=json --user=testUser: (5.878107183s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-290678 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-290678 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.321136ms)
-- stdout --
	{"specversion":"1.0","id":"95f021ee-35f4-491e-bdd8-1638a1a56fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-290678] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f997354e-b09c-4ab5-a08d-ec844aed1a16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16597"}}
	{"specversion":"1.0","id":"8504b016-97c0-4749-a0f8-96fb22d89ea9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf4e7a4e-1caa-4b29-b5a7-7fc2f82f8578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig"}}
	{"specversion":"1.0","id":"40137bd4-5ee5-4c95-83f0-35c72c5ff2d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube"}}
	{"specversion":"1.0","id":"e0742b0a-39b2-4bce-ae23-86147cc5f247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"eb916970-d3c8-471b-8b6f-4c77d31fdc4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bd355a9c-26d7-4700-a07d-6ec0fd2e06dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-290678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-290678
--- PASS: TestErrorJSONOutput (0.24s)
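The io.k8s.sigs.minikube.error event in the stdout block above carries the machine-readable failure; a sketch of extracting it, again assuming jq:

	out/minikube-linux-arm64 start -p json-output-error-290678 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'
	# expected: {"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS", ...}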

TestKicCustomNetwork/create_custom_network (40.77s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-742281 --network=
E0530 21:06:04.550446 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:06:25.030582 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-742281 --network=: (38.639629226s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-742281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-742281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-742281: (2.097562718s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.77s)

TestKicCustomNetwork/use_default_bridge_network (38.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-620543 --network=bridge
E0530 21:07:05.990797 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-620543 --network=bridge: (36.213010333s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-620543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-620543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-620543: (2.077243043s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.32s)

TestKicExistingNetwork (36.61s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-830489 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-830489 --network=existing-network: (34.472744911s)
helpers_test.go:175: Cleaning up "existing-network-830489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-830489
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-830489: (1.959618149s)
--- PASS: TestKicExistingNetwork (36.61s)

TestKicCustomSubnet (35.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-936571 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-936571 --subnet=192.168.60.0/24: (33.211816508s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-936571 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-936571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-936571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-936571: (2.167014217s)
--- PASS: TestKicCustomSubnet (35.41s)
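A sketch of the subnet round trip the test performs:

	out/minikube-linux-arm64 start -p custom-subnet-936571 --subnet=192.168.60.0/24
	# the KIC network should report the requested subnet back
	docker network inspect custom-subnet-936571 --format "{{(index .IPAM.Config 0).Subnet}}"
	# expected output: 192.168.60.0/24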

TestKicStaticIP (34.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-463665 --static-ip=192.168.200.200
E0530 21:08:27.911869 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:08:34.569415 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:08:38.505982 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:38.511709 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:38.521943 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:38.542194 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:38.582464 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:38.662742 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:38.823096 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:39.143695 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:39.783885 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:41.064117 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:43.624304 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:48.744812 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:08:58.985182 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-463665 --static-ip=192.168.200.200: (32.137069958s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-463665 ip
helpers_test.go:175: Cleaning up "static-ip-463665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-463665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-463665: (1.78919615s)
--- PASS: TestKicStaticIP (34.10s)
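And the equivalent round trip for a static node IP:

	out/minikube-linux-arm64 start -p static-ip-463665 --static-ip=192.168.200.200
	# minikube ip should echo the requested address back
	out/minikube-linux-arm64 -p static-ip-463665 ip
	# expected output: 192.168.200.200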

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-027874 --driver=docker  --container-runtime=containerd
E0530 21:09:19.466194 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-027874 --driver=docker  --container-runtime=containerd: (34.907105811s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-031181 --driver=docker  --container-runtime=containerd
E0530 21:10:00.426645 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-031181 --driver=docker  --container-runtime=containerd: (31.331411781s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-027874
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-031181
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-031181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-031181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-031181: (1.99684928s)
helpers_test.go:175: Cleaning up "first-027874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-027874
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-027874: (1.977869992s)
--- PASS: TestMinikubeProfile (71.47s)

TestMountStart/serial/StartWithMountFirst (6.47s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-573815 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-573815 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.470058632s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.47s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-573815 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (6.78s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-575817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-575817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.776803617s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.78s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-575817 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-573815 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-573815 --alsologtostderr -v=5: (1.725394074s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-575817 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-575817
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-575817: (1.24420298s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-575817
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-575817: (6.409075621s)
--- PASS: TestMountStart/serial/RestartStopped (7.41s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-575817 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (89.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-124392 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0530 21:10:44.070716 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:11:11.752192 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:11:22.347412 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-124392 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m28.884432388s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.45s)
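A sketch of the two-node bring-up and the status check that follows it:

	out/minikube-linux-arm64 start -p multinode-124392 --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
	# the control plane and the worker should both report Running
	out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr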

TestMultiNode/serial/DeployApp2Nodes (5.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-124392 -- rollout status deployment/busybox: (2.909978934s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-sbglt -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-sbglt -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-sbglt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.13s)
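The DNS checks above reduce to resolving the cluster service names from every replica; a sketch (pod names vary per run):

	out/minikube-linux-arm64 kubectl -p multinode-124392 -- rollout status deployment/busybox
	PODS=$(out/minikube-linux-arm64 kubectl -p multinode-124392 -- get pods -o jsonpath='{.items[*].metadata.name}')
	for p in $PODS; do
	  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec "$p" -- nslookup kubernetes.default.svc.cluster.local
	done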

TestMultiNode/serial/PingHostFrom2Pods (1.15s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-sbglt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-sbglt -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.15s)
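The host reachability check resolves host.minikube.internal inside a pod (the awk 'NR==5' picks the line of busybox nslookup output that holds the address), then pings the resolved gateway; a sketch for one pod, name taken from the run above:

	out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p multinode-124392 -- exec busybox-67b7f59bb-pxrqx -- sh -c "ping -c 1 192.168.58.1"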

TestMultiNode/serial/AddNode (21.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-124392 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-124392 -v 3 --alsologtostderr: (20.789474753s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.54s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp testdata/cp-test.txt multinode-124392:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1350473294/001/cp-test_multinode-124392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392:/home/docker/cp-test.txt multinode-124392-m02:/home/docker/cp-test_multinode-124392_multinode-124392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test_multinode-124392_multinode-124392-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392:/home/docker/cp-test.txt multinode-124392-m03:/home/docker/cp-test_multinode-124392_multinode-124392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m03 "sudo cat /home/docker/cp-test_multinode-124392_multinode-124392-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp testdata/cp-test.txt multinode-124392-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1350473294/001/cp-test_multinode-124392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392-m02:/home/docker/cp-test.txt multinode-124392:/home/docker/cp-test_multinode-124392-m02_multinode-124392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392 "sudo cat /home/docker/cp-test_multinode-124392-m02_multinode-124392.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392-m02:/home/docker/cp-test.txt multinode-124392-m03:/home/docker/cp-test_multinode-124392-m02_multinode-124392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m03 "sudo cat /home/docker/cp-test_multinode-124392-m02_multinode-124392-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp testdata/cp-test.txt multinode-124392-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1350473294/001/cp-test_multinode-124392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392-m03:/home/docker/cp-test.txt multinode-124392:/home/docker/cp-test_multinode-124392-m03_multinode-124392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392 "sudo cat /home/docker/cp-test_multinode-124392-m03_multinode-124392.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392-m03:/home/docker/cp-test.txt multinode-124392-m02:/home/docker/cp-test_multinode-124392-m03_multinode-124392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test_multinode-124392-m03_multinode-124392-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.80s)
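The copy matrix above is three directions of the same cp command; a sketch (paths illustrative):

	# host -> node
	out/minikube-linux-arm64 -p multinode-124392 cp testdata/cp-test.txt multinode-124392:/home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node, verified over ssh
	out/minikube-linux-arm64 -p multinode-124392 cp multinode-124392:/home/docker/cp-test.txt multinode-124392-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-124392 ssh -n multinode-124392-m02 "sudo cat /home/docker/cp-test.txt"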

TestMultiNode/serial/StopNode (2.37s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-124392 node stop m03: (1.24118869s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-124392 status: exit status 7 (571.694528ms)
-- stdout --
	multinode-124392
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124392-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124392-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr: exit status 7 (560.61275ms)
-- stdout --
	multinode-124392
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124392-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124392-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0530 21:12:50.006988 2370183 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:12:50.007255 2370183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:12:50.007267 2370183 out.go:309] Setting ErrFile to fd 2...
	I0530 21:12:50.007274 2370183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:12:50.007482 2370183 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:12:50.007767 2370183 out.go:303] Setting JSON to false
	I0530 21:12:50.007905 2370183 mustload.go:65] Loading cluster: multinode-124392
	I0530 21:12:50.008518 2370183 config.go:182] Loaded profile config "multinode-124392": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 21:12:50.008551 2370183 status.go:255] checking status of multinode-124392 ...
	I0530 21:12:50.009224 2370183 cli_runner.go:164] Run: docker container inspect multinode-124392 --format={{.State.Status}}
	I0530 21:12:50.009407 2370183 notify.go:220] Checking for updates...
	I0530 21:12:50.032892 2370183 status.go:330] multinode-124392 host status = "Running" (err=<nil>)
	I0530 21:12:50.032921 2370183 host.go:66] Checking if "multinode-124392" exists ...
	I0530 21:12:50.033345 2370183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124392
	I0530 21:12:50.056397 2370183 host.go:66] Checking if "multinode-124392" exists ...
	I0530 21:12:50.056743 2370183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0530 21:12:50.056788 2370183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124392
	I0530 21:12:50.084963 2370183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41021 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/multinode-124392/id_rsa Username:docker}
	I0530 21:12:50.179848 2370183 ssh_runner.go:195] Run: systemctl --version
	I0530 21:12:50.185791 2370183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 21:12:50.199630 2370183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:12:50.268935 2370183 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:55 SystemTime:2023-05-30 21:12:50.258945563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:12:50.269683 2370183 kubeconfig.go:92] found "multinode-124392" server: "https://192.168.58.2:8443"
	I0530 21:12:50.269716 2370183 api_server.go:166] Checking apiserver status ...
	I0530 21:12:50.269763 2370183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0530 21:12:50.283460 2370183 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1284/cgroup
	I0530 21:12:50.295140 2370183 api_server.go:182] apiserver freezer: "11:freezer:/docker/9830917986cdaf70eb3de397f1efa487efdd55bf751b8d3b8639f478361c1200/kubepods/burstable/pod524fa0c3b8392215c82126e5186d1fb0/473f610b1192c232decdcacca93d71bcb0d9e167b37bb77da6ee7f5b8567e144"
	I0530 21:12:50.295210 2370183 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9830917986cdaf70eb3de397f1efa487efdd55bf751b8d3b8639f478361c1200/kubepods/burstable/pod524fa0c3b8392215c82126e5186d1fb0/473f610b1192c232decdcacca93d71bcb0d9e167b37bb77da6ee7f5b8567e144/freezer.state
	I0530 21:12:50.306244 2370183 api_server.go:204] freezer state: "THAWED"
	I0530 21:12:50.306272 2370183 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0530 21:12:50.315297 2370183 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0530 21:12:50.315323 2370183 status.go:421] multinode-124392 apiserver status = Running (err=<nil>)
	I0530 21:12:50.315355 2370183 status.go:257] multinode-124392 status: &{Name:multinode-124392 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0530 21:12:50.315380 2370183 status.go:255] checking status of multinode-124392-m02 ...
	I0530 21:12:50.315698 2370183 cli_runner.go:164] Run: docker container inspect multinode-124392-m02 --format={{.State.Status}}
	I0530 21:12:50.336201 2370183 status.go:330] multinode-124392-m02 host status = "Running" (err=<nil>)
	I0530 21:12:50.336224 2370183 host.go:66] Checking if "multinode-124392-m02" exists ...
	I0530 21:12:50.336535 2370183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124392-m02
	I0530 21:12:50.355046 2370183 host.go:66] Checking if "multinode-124392-m02" exists ...
	I0530 21:12:50.355423 2370183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0530 21:12:50.355475 2370183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124392-m02
	I0530 21:12:50.375386 2370183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41026 SSHKeyPath:/home/jenkins/minikube-integration/16597-2288886/.minikube/machines/multinode-124392-m02/id_rsa Username:docker}
	I0530 21:12:50.471724 2370183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0530 21:12:50.485179 2370183 status.go:257] multinode-124392-m02 status: &{Name:multinode-124392-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0530 21:12:50.485213 2370183 status.go:255] checking status of multinode-124392-m03 ...
	I0530 21:12:50.485553 2370183 cli_runner.go:164] Run: docker container inspect multinode-124392-m03 --format={{.State.Status}}
	I0530 21:12:50.504354 2370183 status.go:330] multinode-124392-m03 host status = "Stopped" (err=<nil>)
	I0530 21:12:50.504376 2370183 status.go:343] host is not running, skipping remaining checks
	I0530 21:12:50.504385 2370183 status.go:257] multinode-124392-m03 status: &{Name:multinode-124392-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
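
A note on the status probe visible in the stderr above: minikube decides the apiserver is Running by locating the kube-apiserver process, reading its freezer cgroup state (THAWED vs. FROZEN), and then polling /healthz. Below is a minimal Go sketch of those two checks; the cgroup path and URL passed to it are placeholders for the values minikube resolves at runtime via docker inspect.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"strings"
	)

	// apiserverRunning mirrors the probe in the log above: the freezer cgroup
	// must read THAWED and /healthz must answer 200. Both arguments are
	// placeholders; minikube derives the real values at runtime.
	func apiserverRunning(freezerPath, healthzURL string) (bool, error) {
		state, err := os.ReadFile(freezerPath)
		if err != nil {
			return false, err
		}
		if strings.TrimSpace(string(state)) != "THAWED" {
			return false, nil // FROZEN would mean the cluster is paused
		}
		// The apiserver serves a self-signed certificate, so skip
		// verification for this health probe only.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(healthzURL)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := apiserverRunning(
			"/sys/fs/cgroup/freezer/docker/example/freezer.state", // hypothetical path
			"https://192.168.58.2:8443/healthz",
		)
		fmt.Println(ok, err)
	}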

                                                
                                    
TestMultiNode/serial/StartAfterStop (20.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-124392 node start m03 --alsologtostderr: (19.743416957s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (20.59s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (139.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-124392
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-124392
E0530 21:13:34.569643 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-124392: (25.183887497s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-124392 --wait=true -v=8 --alsologtostderr
E0530 21:13:38.505177 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:14:06.187979 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:14:57.615191 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-124392 --wait=true -v=8 --alsologtostderr: (1m54.635475688s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-124392
--- PASS: TestMultiNode/serial/RestartKeepsNodes (139.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-124392 node delete m03: (4.42168046s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)
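
The readiness assertion above uses a kubectl go-template that walks .status.conditions looking for the Ready condition. The same check can be sketched in Go by decoding `kubectl get nodes -o json`; the struct below is an ad-hoc partial view of that payload (not a client-go type), declaring only the fields the check needs.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeList is a minimal view of `kubectl get nodes -o json`.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var nodes nodeList
		if err := json.Unmarshal(out, &nodes); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
				}
			}
		}
	}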

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 stop
E0530 21:15:44.070837 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-124392 stop: (24.162461129s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-124392 status: exit status 7 (96.881087ms)

                                                
                                                
-- stdout --
	multinode-124392
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124392-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr: exit status 7 (96.106218ms)

                                                
                                                
-- stdout --
	multinode-124392
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124392-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 21:16:00.575630 2378832 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:16:00.575822 2378832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:16:00.575848 2378832 out.go:309] Setting ErrFile to fd 2...
	I0530 21:16:00.575868 2378832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:16:00.576045 2378832 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:16:00.576266 2378832 out.go:303] Setting JSON to false
	I0530 21:16:00.576394 2378832 mustload.go:65] Loading cluster: multinode-124392
	I0530 21:16:00.576481 2378832 notify.go:220] Checking for updates...
	I0530 21:16:00.576832 2378832 config.go:182] Loaded profile config "multinode-124392": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0530 21:16:00.576868 2378832 status.go:255] checking status of multinode-124392 ...
	I0530 21:16:00.577421 2378832 cli_runner.go:164] Run: docker container inspect multinode-124392 --format={{.State.Status}}
	I0530 21:16:00.601118 2378832 status.go:330] multinode-124392 host status = "Stopped" (err=<nil>)
	I0530 21:16:00.601139 2378832 status.go:343] host is not running, skipping remaining checks
	I0530 21:16:00.601147 2378832 status.go:257] multinode-124392 status: &{Name:multinode-124392 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0530 21:16:00.601189 2378832 status.go:255] checking status of multinode-124392-m02 ...
	I0530 21:16:00.601523 2378832 cli_runner.go:164] Run: docker container inspect multinode-124392-m02 --format={{.State.Status}}
	I0530 21:16:00.621784 2378832 status.go:330] multinode-124392-m02 host status = "Stopped" (err=<nil>)
	I0530 21:16:00.621810 2378832 status.go:343] host is not running, skipping remaining checks
	I0530 21:16:00.621818 2378832 status.go:257] multinode-124392-m02 status: &{Name:multinode-124392-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.36s)
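
Note that `minikube status` exits 7 when hosts are stopped, so the non-zero exits above are expected data rather than failures. A sketch of how a caller can read that exit code with os/exec (binary path and profile name copied from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-124392", "status")
		out, err := cmd.Output()
		fmt.Print(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit code 7 is minikube's "host stopped" status, not a failure.
			fmt.Println("status exit code:", ee.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		}
	}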

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-124392 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-124392 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m26.729197643s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-124392 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.61s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-124392
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-124392-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-124392-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.230357ms)

                                                
                                                
-- stdout --
	* [multinode-124392-m02] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-124392-m02' is duplicated with machine name 'multinode-124392-m02' in profile 'multinode-124392'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-124392-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-124392-m03 --driver=docker  --container-runtime=containerd: (32.500617093s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-124392
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-124392: exit status 80 (630.098772ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-124392
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-124392-m03 already exists in multinode-124392-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-124392-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-124392-m03: (2.065016071s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.35s)

                                                
                                    
TestPreload (179.35s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-421553 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0530 21:18:34.569416 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:18:38.505496 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-421553 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m17.798206343s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 ssh -p test-preload-421553 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 ssh -p test-preload-421553 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.456064102s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-421553
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-421553: (12.112824607s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-421553 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0530 21:20:44.070854 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-421553 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m25.213615071s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-arm64 ssh -p test-preload-421553 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-421553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-421553
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-421553: (2.419893254s)
--- PASS: TestPreload (179.35s)
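
The point of TestPreload is that an image pulled into containerd before `minikube stop` must still be present after the restart; the final `crictl image ls` is the verification step. A sketch of the same check, shelling out with the profile and image names taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same check the test performs after the restart: the busybox image
		// pulled before `minikube stop` should still be in containerd's store.
		out, err := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "test-preload-421553", "--", "sudo", "crictl", "image", "ls").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("preloaded image survived the restart")
		} else {
			fmt.Println("image missing: restart lost containerd state")
		}
	}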

                                                
                                    
TestScheduledStopUnix (111.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-713450 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-713450 --memory=2048 --driver=docker  --container-runtime=containerd: (35.181919392s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-713450 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-713450 -n scheduled-stop-713450
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-713450 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-713450 --cancel-scheduled
E0530 21:22:07.114070 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-713450 -n scheduled-stop-713450
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-713450
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-713450 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-713450
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-713450: exit status 7 (70.225585ms)

                                                
                                                
-- stdout --
	scheduled-stop-713450
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-713450 -n scheduled-stop-713450
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-713450 -n scheduled-stop-713450: exit status 7 (74.838047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-713450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-713450
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-713450: (5.057501378s)
--- PASS: TestScheduledStopUnix (111.87s)
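
Scheduled stop arms a timer (`--schedule 5m`) that `--cancel-scheduled` can disarm before it fires; the "os: process already finished" lines above show a re-schedule replacing the earlier timer. As a toy illustration only (minikube actually daemonizes a separate process rather than using an in-process timer), the arm/cancel pattern looks like:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Arm a stop 15s out, the way `minikube stop --schedule 15s` does.
		timer := time.AfterFunc(15*time.Second, func() {
			fmt.Println("stopping cluster now")
		})
		// `--cancel-scheduled` corresponds to disarming the pending timer.
		if timer.Stop() {
			fmt.Println("scheduled stop cancelled before it fired")
		}
	}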

                                                
                                    
TestInsufficientStorage (13s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-518617 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-518617 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.452161741s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d4cf6472-736b-49fb-9d29-f1141fc7e17c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-518617] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"65e0d2ca-9dd7-4438-b6bd-3fe7fa2c9f17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16597"}}
	{"specversion":"1.0","id":"bdf48c6c-dee7-4aa8-80f8-977fa692c392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b7361255-e7a4-462e-91ab-7dca0246dc7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig"}}
	{"specversion":"1.0","id":"5996b10f-31c1-4bc5-9d5e-a0cca55c9429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube"}}
	{"specversion":"1.0","id":"0108c2dc-d6cf-4b6d-8d6c-20bfea01ce35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d9a581fb-82f5-463f-a4d3-22c568c11391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ec6697ed-3719-4e29-b2e6-54f95d8c3697","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b974dc22-cf88-4d83-a13c-9150e2d22f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4e928444-fae8-441c-bdfd-69659c171477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"769ce745-aecc-4ab5-abf1-b614a84be7df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"658f05ce-b9f4-4a4d-a2d1-caa5fc859881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-518617 in cluster insufficient-storage-518617","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ab7b767-f8c9-4db6-b66e-0fa9a4e99742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ee75d4a-3fa1-4892-b8ca-f0c4d95d5a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"42f2d0f4-d570-49f2-a538-7dc796fb9a40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-518617 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-518617 --output=json --layout=cluster: exit status 7 (303.963625ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-518617","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-518617","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0530 21:23:09.583437 2396201 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-518617" does not appear in /home/jenkins/minikube-integration/16597-2288886/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-518617 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-518617 --output=json --layout=cluster: exit status 7 (307.047435ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-518617","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-518617","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0530 21:23:09.894419 2396255 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-518617" does not appear in /home/jenkins/minikube-integration/16597-2288886/kubeconfig
	E0530 21:23:09.906694 2396255 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/insufficient-storage-518617/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-518617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-518617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-518617: (1.936458824s)
--- PASS: TestInsufficientStorage (13.00s)
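
With `--output=json`, minikube emits one CloudEvent per line, and the out-of-space condition arrives as an io.k8s.sigs.minikube.error event carrying exitcode 26. A sketch of decoding that stream, with field names taken from the events shown above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent matches the shape minikube prints with --output=json.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` here
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some events are long
		for sc.Scan() {
			var ev cloudEvent
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}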

                                                
                                    
TestRunningBinaryUpgrade (117.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.3194068375.exe start -p running-upgrade-884304 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.3194068375.exe start -p running-upgrade-884304 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m20.922059299s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-884304 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0530 21:33:34.569539 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:33:38.505160 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-884304 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.276870653s)
helpers_test.go:175: Cleaning up "running-upgrade-884304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-884304
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-884304: (2.624007398s)
--- PASS: TestRunningBinaryUpgrade (117.20s)

                                                
                                    
TestKubernetesUpgrade (424.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.869158266s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-908434
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-908434: (1.443434158s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-908434 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-908434 status --format={{.Host}}: exit status 7 (87.179918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m26.195580181s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-908434 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (156.015076ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-908434] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-908434
	    minikube start -p kubernetes-upgrade-908434 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9084342 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-908434 --kubernetes-version=v1.27.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0530 21:31:37.615868 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-908434 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.686916484s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-908434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-908434
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-908434: (2.178847715s)
--- PASS: TestKubernetesUpgrade (424.78s)
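
The downgrade attempt is rejected up front with exit 106 (K8S_DOWNGRADE_UNSUPPORTED): minikube compares the requested version against the cluster's current one before changing anything. A sketch of that guard, here using the golang.org/x/mod/semver package as a stand-in for whatever comparison minikube performs internally:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkUpgrade sketches the guard behind exit code 106: refuse when the
	// requested version is older than what the cluster already runs.
	func checkUpgrade(current, requested string) error {
		if semver.Compare(requested, current) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				current, requested)
		}
		return nil
	}

	func main() {
		fmt.Println(checkUpgrade("v1.27.2", "v1.16.0")) // downgrade: error
		fmt.Println(checkUpgrade("v1.27.2", "v1.27.2")) // same version: ok
	}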

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-756226 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-756226 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (84.408004ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-756226] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
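
This sub-test is pure flag validation: `--no-kubernetes` contradicts `--kubernetes-version`, so minikube exits 14 (MK_USAGE) without starting anything. The same mutual-exclusion check, sketched with the standard flag package:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Mirrors the MK_USAGE rejection in the log: the two flags contradict.
		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags ok")
	}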

                                                
                                    
TestPause/serial/Start (92.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-744148 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-744148 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.401314914s)
--- PASS: TestPause/serial/Start (92.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (47.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-756226 --driver=docker  --container-runtime=containerd
E0530 21:23:34.569387 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:23:38.505896 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-756226 --driver=docker  --container-runtime=containerd: (46.884293192s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-756226 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.32s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-756226 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-756226 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.244149821s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-756226 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-756226 status -o json: exit status 2 (324.472295ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-756226","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-756226
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-756226: (1.995547906s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.57s)

                                                
                                    
TestNoKubernetes/serial/Start (5.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-756226 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-756226 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.787418275s)
--- PASS: TestNoKubernetes/serial/Start (5.79s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-756226 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-756226 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.848248ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
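
The "Process exited with status 3" above is the expected outcome: `systemctl is-active --quiet` exits 0 for an active unit and non-zero otherwise (3 for inactive), so a failing exit over SSH is precisely the "kubelet is not running" result the test wants. A sketch interpreting that exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &ee) && ee.ExitCode() == 3:
			// systemd's code for an inactive unit.
			fmt.Println("kubelet is not running (as the test expects)")
		default:
			fmt.Println("unexpected error:", err)
		}
	}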

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-756226
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-756226: (1.238001152s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-756226 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-756226 --driver=docker  --container-runtime=containerd: (6.629275962s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-756226 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-756226 "sudo systemctl is-active --quiet service kubelet": exit status 1 (304.112941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (12.37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-744148 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-744148 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (12.352611597s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (12.37s)

                                                
                                    
TestPause/serial/Pause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-744148 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.99s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-744148 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-744148 --output=json --layout=cluster: exit status 2 (376.833927ms)

                                                
                                                
-- stdout --
	{"Name":"pause-744148","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-744148","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
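
The `--layout=cluster` output reports component health with HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage (seen earlier in TestInsufficientStorage). A minimal struct for decoding that payload, with field names copied from the JSON above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterState mirrors the --layout=cluster JSON printed above.
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Components map[string]struct {
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	}

	func main() {
		payload := `{"Name":"pause-744148","StatusCode":418,"StatusName":"Paused",
			"Components":{"kubeconfig":{"StatusCode":200,"StatusName":"OK"}}}`
		var st clusterState
		if err := json.Unmarshal([]byte(payload), &st); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		// 418 ("I'm a teapot") is minikube's code for a paused cluster.
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	}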

                                                
                                    
TestPause/serial/Unpause (1.01s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-744148 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-744148 --alsologtostderr -v=5: (1.013821447s)
--- PASS: TestPause/serial/Unpause (1.01s)

                                                
                                    
TestPause/serial/PauseAgain (1.11s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-744148 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-744148 --alsologtostderr -v=5: (1.112519007s)
--- PASS: TestPause/serial/PauseAgain (1.11s)

                                                
                                    
TestPause/serial/DeletePaused (2.68s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-744148 --alsologtostderr -v=5
E0530 21:25:01.548275 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-744148 --alsologtostderr -v=5: (2.678952598s)
--- PASS: TestPause/serial/DeletePaused (2.68s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-744148
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-744148: exit status 1 (20.614031ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-744148: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                    
TestNetworkPlugins/group/false (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-arm64 start -p false-909248 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-909248 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (213.167724ms)

                                                
                                                
-- stdout --
	* [false-909248] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 21:34:47.519269 2439456 out.go:296] Setting OutFile to fd 1 ...
	I0530 21:34:47.519465 2439456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:34:47.519477 2439456 out.go:309] Setting ErrFile to fd 2...
	I0530 21:34:47.519484 2439456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 21:34:47.519663 2439456 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16597-2288886/.minikube/bin
	I0530 21:34:47.520131 2439456 out.go:303] Setting JSON to false
	I0530 21:34:47.521219 2439456 start.go:125] hostinfo: {"hostname":"ip-172-31-31-251","uptime":177387,"bootTime":1685305101,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0530 21:34:47.521292 2439456 start.go:135] virtualization:  
	I0530 21:34:47.526569 2439456 out.go:177] * [false-909248] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0530 21:34:47.528772 2439456 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 21:34:47.528896 2439456 notify.go:220] Checking for updates...
	I0530 21:34:47.531314 2439456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 21:34:47.534206 2439456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16597-2288886/kubeconfig
	I0530 21:34:47.536385 2439456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16597-2288886/.minikube
	I0530 21:34:47.538675 2439456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0530 21:34:47.541192 2439456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 21:34:47.544581 2439456 config.go:182] Loaded profile config "stopped-upgrade-708012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0530 21:34:47.544715 2439456 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 21:34:47.574598 2439456 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0530 21:34:47.574754 2439456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0530 21:34:47.666296 2439456 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-05-30 21:34:47.655490084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0530 21:34:47.666413 2439456 docker.go:294] overlay module found
	I0530 21:34:47.670112 2439456 out.go:177] * Using the docker driver based on user configuration
	I0530 21:34:47.672395 2439456 start.go:295] selected driver: docker
	I0530 21:34:47.672414 2439456 start.go:870] validating driver "docker" against <nil>
	I0530 21:34:47.672428 2439456 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 21:34:47.675410 2439456 out.go:177] 
	W0530 21:34:47.677769 2439456 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0530 21:34:47.679777 2439456 out.go:177] 

** /stderr **
net_test.go:86: 
----------------------- debugLogs start: false-909248 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-909248

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-909248

>>> host: /etc/nsswitch.conf:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/hosts:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/resolv.conf:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-909248

>>> host: crictl pods:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: crictl containers:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> k8s: describe netcat deployment:
error: context "false-909248" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-909248" does not exist

>>> k8s: netcat logs:
error: context "false-909248" does not exist

>>> k8s: describe coredns deployment:
error: context "false-909248" does not exist

>>> k8s: describe coredns pods:
error: context "false-909248" does not exist

>>> k8s: coredns logs:
error: context "false-909248" does not exist

>>> k8s: describe api server pod(s):
error: context "false-909248" does not exist

>>> k8s: api server logs:
error: context "false-909248" does not exist

>>> host: /etc/cni:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: ip a s:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: ip r s:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: iptables-save:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: iptables table nat:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> k8s: describe kube-proxy daemon set:
error: context "false-909248" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-909248" does not exist

>>> k8s: kube-proxy logs:
error: context "false-909248" does not exist

>>> host: kubelet daemon status:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: kubelet daemon config:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> k8s: kubelet logs:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 May 2023 21:28:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-708012
contexts:
- context:
    cluster: stopped-upgrade-708012
    user: stopped-upgrade-708012
  name: stopped-upgrade-708012
current-context: ""
kind: Config
preferences: {}
users:
- name: stopped-upgrade-708012
  user:
    client-certificate: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.crt
    client-key: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-909248

>>> host: docker daemon status:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: docker daemon config:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/docker/daemon.json:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: docker system info:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: cri-docker daemon status:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: cri-docker daemon config:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: cri-dockerd version:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: containerd daemon status:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: containerd daemon config:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/containerd/config.toml:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: containerd config dump:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: crio daemon status:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: crio daemon config:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: /etc/crio:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"

>>> host: crio config:
* Profile "false-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909248"
----------------------- debugLogs end: false-909248 [took: 3.517460157s] --------------------------------
helpers_test.go:175: Cleaning up "false-909248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-909248
--- PASS: TestNetworkPlugins/group/false (3.90s)
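Note: the MK_USAGE exit in the log above is this test's expected outcome. The "false" network-plugin profile presumably starts minikube with --cni=false, and start-up validation rejects that combination because the containerd runtime needs a CNI. A minimal sketch of a start invocation that passes the same check, with an illustrative profile name:

out/minikube-linux-arm64 start -p example-cni --driver=docker \
  --container-runtime=containerd --cni=bridge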

TestStartStop/group/old-k8s-version/serial/FirstStart (127.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-495193 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0530 21:40:44.070728 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-495193 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m7.34397629s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.35s)
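Note: the E-level cert_rotation lines above appear to be client-go certificate-reload warnings that reference client certs of profiles (such as functional-812242) already deleted by earlier tests; they do not affect this test's result. A sketch of confirming which profiles still exist:

out/minikube-linux-arm64 profile list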

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-708012
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-708012: (1.112407702s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestStartStop/group/no-preload/serial/FirstStart (75.4s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-526452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0530 21:41:41.548733 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-526452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m15.402902785s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.40s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.74s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-495193 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c09ad88-6f53-4789-8e8f-a8cb8c86fa4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c09ad88-6f53-4789-8e8f-a8cb8c86fa4f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.042267738s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-495193 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.74s)
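Note: the log only shows that testdata/busybox.yaml creates a pod named "busybox" in the default namespace carrying the label integration-test=busybox. The sketch below reconstructs a manifest of that shape; the image and the sleep command are assumptions, not the test's actual file.

kubectl --context old-k8s-version-495193 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF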

TestStartStop/group/no-preload/serial/DeployApp (9.72s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-526452 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d043c411-3fd4-4931-b3b7-272cae1cedf6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d043c411-3fd4-4931-b3b7-272cae1cedf6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.035491609s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-526452 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-495193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-495193 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.86s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-526452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-526452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.628843134s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-526452 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.86s)

TestStartStop/group/old-k8s-version/serial/Stop (12.53s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-495193 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-495193 --alsologtostderr -v=3: (12.525480431s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.53s)

TestStartStop/group/no-preload/serial/Stop (12.42s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-526452 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-526452 --alsologtostderr -v=3: (12.419991996s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-495193 -n old-k8s-version-495193
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-495193 -n old-k8s-version-495193: exit status 7 (76.352196ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-495193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
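Note: the status probes above use a Go template to read a single field; on a stopped profile the command prints "Stopped" and exits non-zero (exit status 7 here), which the test explicitly tolerates ("may be ok"). A sketch of the same probe run by hand:

out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-495193; echo "exit=$?"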

TestStartStop/group/old-k8s-version/serial/SecondStart (680.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-495193 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-495193 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m19.998405075s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-495193 -n old-k8s-version-495193
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (680.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-526452 -n no-preload-526452
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-526452 -n no-preload-526452: exit status 7 (105.095062ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-526452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (349.83s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-526452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0530 21:43:34.568654 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:43:38.505823 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:45:44.070671 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:48:17.616236 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-526452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (5m49.190397238s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-526452 -n no-preload-526452
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.83s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-l99q6" [98877195-7dcb-43d7-b226-55730d016968] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-l99q6" [98877195-7dcb-43d7-b226-55730d016968] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.024489909s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.03s)
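Note: the test's wait loop polls for pods matching the label k8s-app=kubernetes-dashboard until they are healthy. A roughly equivalent manual check, as a sketch (the shortened timeout is illustrative):

kubectl --context no-preload-526452 -n kubernetes-dashboard wait pod \
  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=5m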

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-l99q6" [98877195-7dcb-43d7-b226-55730d016968] Running
E0530 21:48:34.569371 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011174392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-526452 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-526452 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
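Note: the image check above parses the JSON that crictl emits inside the node. A sketch of the same inventory reduced to image tags; piping through jq is an assumption of this note, not something the test does:

out/minikube-linux-arm64 ssh -p no-preload-526452 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'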

TestStartStop/group/no-preload/serial/Pause (3.85s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-526452 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-526452 --alsologtostderr -v=1: (1.154986026s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-526452 -n no-preload-526452
E0530 21:48:38.505843 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-526452 -n no-preload-526452: exit status 2 (384.119764ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-526452 -n no-preload-526452
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-526452 -n no-preload-526452: exit status 2 (386.125552ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-526452 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-526452 -n no-preload-526452
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-526452 -n no-preload-526452
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.85s)

TestStartStop/group/embed-certs/serial/FirstStart (69.97s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-434389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-434389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m9.96584649s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.97s)

TestStartStop/group/embed-certs/serial/DeployApp (8.52s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-434389 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9fcdb79e-2aae-4b97-b9e1-cfad8e01d4b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9fcdb79e-2aae-4b97-b9e1-cfad8e01d4b8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.02600363s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-434389 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-434389 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-434389 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022111056s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-434389 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (12.29s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-434389 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-434389 --alsologtostderr -v=3: (12.289529695s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-434389 -n embed-certs-434389
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-434389 -n embed-certs-434389: exit status 7 (78.326584ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-434389 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (353.08s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-434389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0530 21:50:44.070080 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
E0530 21:52:08.758947 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:08.764257 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:08.774732 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:08.794987 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:08.835234 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:08.915566 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:09.076067 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:09.396495 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:10.037479 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:11.317886 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:13.878309 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:18.999035 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:29.239594 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:52:49.719874 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:53:30.680331 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:53:34.568857 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:53:38.505729 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-434389 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (5m52.604505566s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-434389 -n embed-certs-434389
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (353.08s)
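Note: the --embed-certs flag exercised by this group makes minikube write certificate data inline into the kubeconfig entry instead of referencing .crt/.key file paths. A sketch of inspecting that, assuming kubectl's global --context flag selects the profile's context (--raw is needed because certificate data is otherwise redacted):

kubectl config view --minify --raw --context=embed-certs-434389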

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2c2bw" [f8071644-d60a-47c8-8a8a-0152bd581c67] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022101355s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2c2bw" [f8071644-d60a-47c8-8a8a-0152bd581c67] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006823431s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-495193 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-495193 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (3.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-495193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495193 -n old-k8s-version-495193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495193 -n old-k8s-version-495193: exit status 2 (343.199755ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495193 -n old-k8s-version-495193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495193 -n old-k8s-version-495193: exit status 2 (370.239076ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-495193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495193 -n old-k8s-version-495193
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495193 -n old-k8s-version-495193
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.35s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-865120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0530 21:54:52.600568 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:55:27.116778 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-865120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m23.914618767s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.91s)
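Note: this profile serves the API on port 8444 instead of the default 8443 via --apiserver-port=8444, which is what the "diff-port" group exercises. A sketch of verifying the port from the client side; cluster-info should print the control-plane URL with :8444:

kubectl --context default-k8s-diff-port-865120 cluster-info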

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-865120 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e3be9b2-b9e8-4153-86ff-f6264eb12e72] Pending
helpers_test.go:344: "busybox" [8e3be9b2-b9e8-4153-86ff-f6264eb12e72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8e3be9b2-b9e8-4153-86ff-f6264eb12e72] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.041749176s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-865120 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.78s)
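
DeployApp schedules a busybox pod from the repo's testdata and reads its open-file limit. A rough hand-run equivalent, assuming testdata/busybox.yaml defines a pod named busybox (kubectl wait stands in for the test's own poll loop):

    kubectl --context default-k8s-diff-port-865120 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-865120 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context default-k8s-diff-port-865120 exec busybox -- /bin/sh -c "ulimit -n"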

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-865120 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-865120 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)
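
The addon step substitutes a placeholder image so nothing is pulled from the real metrics-server registry: --images maps the addon's image name to a replacement and --registries points it at a fake registry. Checked by hand (the grep is an illustrative convenience):

    out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-865120 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context default-k8s-diff-port-865120 -n kube-system describe deploy/metrics-server | grep -i image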

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-865120 --alsologtostderr -v=3
E0530 21:55:44.070991 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-865120 --alsologtostderr -v=3: (12.556896484s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120: exit status 7 (75.813251ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-865120 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
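
Here exit status 7 from `status` accompanies a Stopped host, and the test treats it as acceptable; addons can still be toggled while the machine is down. A minimal sketch:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-865120 \
      || echo "status exited $?"   # prints "Stopped", then "status exited 7"
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-865120 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4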

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (348.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-865120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-865120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (5m48.151787219s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (348.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-q5vkj" [9212a77f-070b-47ba-8812-90778c3c41d9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-q5vkj" [9212a77f-070b-47ba-8812-90778c3c41d9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.030186221s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-q5vkj" [9212a77f-070b-47ba-8812-90778c3c41d9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02049907s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-434389 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.20s)
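
Both dashboard assertions reduce to waiting for the kubernetes-dashboard pod to report Ready and confirming the scraper deployment exists; by hand (kubectl wait standing in for the test's poll):

    kubectl --context embed-certs-434389 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
    kubectl --context embed-certs-434389 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper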

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-434389 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)
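
VerifyKubernetesImages lists everything containerd has pulled and flags images outside the expected minikube set (here kindnet and the busybox test image, both benign). The same listing by hand, assuming jq is available on the host for readability (the test parses the JSON itself):

    out/minikube-linux-arm64 ssh -p embed-certs-434389 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'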

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-434389 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-434389 --alsologtostderr -v=1: (1.466965583s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-434389 -n embed-certs-434389
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-434389 -n embed-certs-434389: exit status 2 (545.1919ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-434389 -n embed-certs-434389
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-434389 -n embed-certs-434389: exit status 2 (531.929439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-434389 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-434389 --alsologtostderr -v=1: (1.376615557s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-434389 -n embed-certs-434389
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-434389 -n embed-certs-434389
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-429515 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0530 21:57:08.256538 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.262108 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.272575 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.292846 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.333167 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.413485 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.574447 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:08.759859 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:57:08.895119 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:09.536172 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:10.816400 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:13.377240 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-429515 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (43.171707013s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.17s)
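
The newest-cni profile starts with a bare CNI configuration: --network-plugin=cni plus a pod CIDR handed to kubeadm via --extra-config, with --wait relaxed because nothing can schedule until a CNI is installed (hence the "cni mode requires additional setup" warnings in the steps below). The interleaved cert_rotation errors appear to be background kubeconfig watchers still referencing client certs of profiles torn down earlier in the run, not a failure of this test. The start invocation, re-wrapped for readability:

    out/minikube-linux-arm64 start -p newest-cni-429515 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.27.2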

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-429515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-429515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063158292s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-429515 --alsologtostderr -v=3
E0530 21:57:18.497756 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-429515 --alsologtostderr -v=3: (1.291685299s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-429515 -n newest-cni-429515
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-429515 -n newest-cni-429515: exit status 7 (74.592864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-429515 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (44.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-429515 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2
E0530 21:57:28.737959 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:57:36.440805 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 21:57:49.219035 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-429515 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.2: (44.482576952s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-429515 -n newest-cni-429515
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-429515 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-429515 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-429515 -n newest-cni-429515
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-429515 -n newest-cni-429515: exit status 2 (379.857243ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-429515 -n newest-cni-429515
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-429515 -n newest-cni-429515: exit status 2 (423.675649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-429515 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-429515 -n newest-cni-429515
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-429515 -n newest-cni-429515
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.53s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p auto-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0530 21:58:21.549316 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
E0530 21:58:30.179285 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 21:58:34.568829 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 21:58:38.505789 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p auto-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.764172851s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.77s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-bhfxw" [949293e1-1831-4445-aa01-b6f9f0f79140] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-bhfxw" [949293e1-1831-4445-aa01-b6f9f0f79140] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.008810447s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.43s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
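
Each network-plugin group runs the same connectivity trio against a netcat deployment: DNS resolution of the in-cluster API service, a loopback connection inside the pod, and a hairpin connection back to the pod through its own service name. Reproduced by hand against the auto profile:

    kubectl --context auto-909248 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"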

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0530 22:00:44.070414 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/functional-812242/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (55.763947103s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9wg89" [d41d2206-c12d-4775-b920-9a796a810808] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.029380056s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
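
ControllerPod only waits for the plugin's daemon pod (label app=kindnet) to go Running in kube-system; a one-liner stand-in for the test's poll:

    kubectl --context kindnet-909248 -n kube-system wait pod -l app=kindnet \
      --for=condition=Ready --timeout=10m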

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-b4t25" [aaa361bb-8bc2-4116-ae15-8f1b47d7908f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-b4t25" [aaa361bb-8bc2-4116-ae15-8f1b47d7908f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.011177132s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-79jb6" [78e843ce-3b87-4af2-8f3e-c8b338cc9798] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-79jb6" [78e843ce-3b87-4af2-8f3e-c8b338cc9798] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.028437797s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.03s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p calico-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p calico-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m22.634234811s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-79jb6" [78e843ce-3b87-4af2-8f3e-c8b338cc9798] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00972215s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-865120 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-865120 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-865120 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-865120 --alsologtostderr -v=1: (1.11914744s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120: exit status 2 (486.647814ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120
E0530 22:02:08.256470 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120: exit status 2 (420.708601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-865120 --alsologtostderr -v=1
E0530 22:02:08.759687 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-865120 --alsologtostderr -v=1: (1.110265236s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-865120 -n default-k8s-diff-port-865120
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)
E0530 22:06:06.869665 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:06.874986 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:06.885265 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:06.905544 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:06.945819 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:07.026270 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:07.186696 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:07.507320 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:08.148155 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:09.428387 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:11.988748 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:13.975859 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:06:17.108989 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:27.349545 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:47.829863 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
E0530 22:06:54.936979 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:07:08.257108 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
E0530 22:07:08.759227 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/no-preload-526452/client.crt: no such file or directory
E0530 22:07:22.132890 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0530 22:02:35.941663 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/old-k8s-version-495193/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m13.929246936s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.93s)
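
--cni takes either a built-in plugin name (as in the kindnet/calico/flannel groups) or a path to a CNI manifest, which is what custom-flannel exercises here with the repo's testdata copy of kube-flannel:

    out/minikube-linux-arm64 start -p custom-flannel-909248 --memory=3072 --wait=true \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd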

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dpzzp" [cd6b8f9d-e2a5-42c0-85de-d5a7c760aa7b] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.030475146s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-x9c7p" [930cbbbb-df17-47f7-ac94-66fbaa43f318] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-x9c7p" [930cbbbb-df17-47f7-ac94-66fbaa43f318] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.021958111s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.56s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-r9q5c" [42eff173-61c7-4490-8f64-e752e576fd32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-r9q5c" [42eff173-61c7-4490-8f64-e752e576fd32] Running
E0530 22:03:34.569129 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.017669994s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0530 22:03:38.506144 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/ingress-addon-legacy-208395/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m30.783828982s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.78s)
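
--enable-default-cni is the legacy spelling for the bridge CNI (current minikube documents it as deprecated in favor of --cni=bridge), so this group covers the plain bridge path rather than a third-party plugin:

    out/minikube-linux-arm64 start -p enable-default-cni-909248 --memory=3072 --wait=true \
      --enable-default-cni=true --driver=docker --container-runtime=containerd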

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0530 22:04:38.290908 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.296107 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.306721 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.326953 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.367191 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.447427 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.607872 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:38.928163 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:39.568992 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:40.849708 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:43.409891 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:48.530361 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:04:57.617052 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/addons-084881/client.crt: no such file or directory
E0530 22:04:58.771466 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
E0530 22:05:19.251823 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/auto-909248/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p flannel-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m14.086945034s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g25m8" [52de7b9e-aabe-4479-a00c-0e1fbee2368b] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.025767546s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-lnfn4" [13bb2d73-cb79-415a-9549-b604a031dd61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-lnfn4" [13bb2d73-cb79-415a-9549-b604a031dd61] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.014789729s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.46s)
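Note: testdata/netcat-deployment.yaml itself is not included in this report; from the log one can only infer a Deployment named netcat whose pods carry the app=netcat label and run a dnsutils container. A minimal sketch of the same deploy-and-wait flow under those assumptions:

# Re-apply the manifest and block until the pods are Ready, mirroring the test's 15m wait.
# The --context value matches the profile used above.
kubectl --context enable-default-cni-909248 replace --force -f testdata/netcat-deployment.yaml
kubectl --context enable-default-cni-909248 wait --for=condition=Ready pod -l app=netcat --timeout=15m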

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (9.51s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-sf2x7" [dbace5e3-9652-4094-a05c-e001223a2b65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-sf2x7" [dbace5e3-9652-4094-a05c-e001223a2b65] Running
E0530 22:05:33.014422 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.019727 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.030072 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.050373 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.090646 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.170915 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.331345 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:33.652042 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
E0530 22:05:34.293136 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.012077319s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.51s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0530 22:05:35.573357 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/default-k8s-diff-port-865120/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
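Note: the Localhost and HairPin probes differ only in their target. Localhost has the netcat pod dial its own port 8080 on loopback, confirming the server itself is up; HairPin then dials the pod's own Service name ("netcat"), which resolves to the ClusterIP and, assuming the Service fronts that same pod as the test's name suggests, must be NATed back to the originating pod (hairpin traffic). Commands taken verbatim from the test, shown here only to contrast the two:

# Localhost: pod -> 127.0.0.1:8080 (no CNI routing involved)
kubectl --context enable-default-cni-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: pod -> Service "netcat":8080 -> back to the same pod
kubectl --context enable-default-cni-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"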

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (83.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p bridge-909248 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m23.146096224s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-909248 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-909248 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-5cwj7" [b2c6a510-2af5-4767-98b9-d29629683058] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0530 22:07:28.790732 2294292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/kindnet-909248/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-5cwj7" [b2c6a510-2af5-4767-98b9-d29629683058] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007956545s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.37s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-909248 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-909248 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (28/302)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-074087 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-074087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-074087
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-002673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-002673
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-909248 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-909248

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-909248

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/hosts:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/resolv.conf:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-909248

>>> host: crictl pods:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: crictl containers:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> k8s: describe netcat deployment:
error: context "kubenet-909248" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-909248" does not exist

>>> k8s: netcat logs:
error: context "kubenet-909248" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-909248" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-909248" does not exist

>>> k8s: coredns logs:
error: context "kubenet-909248" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-909248" does not exist

>>> k8s: api server logs:
error: context "kubenet-909248" does not exist

>>> host: /etc/cni:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: ip a s:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: ip r s:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: iptables-save:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: iptables table nat:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-909248" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-909248" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-909248" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: kubelet daemon config:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> k8s: kubelet logs:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 May 2023 21:28:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-708012
contexts:
- context:
    cluster: stopped-upgrade-708012
    user: stopped-upgrade-708012
  name: stopped-upgrade-708012
current-context: ""
kind: Config
preferences: {}
users:
- name: stopped-upgrade-708012
  user:
    client-certificate: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.crt
    client-key: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-909248

>>> host: docker daemon status:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: docker daemon config:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: docker system info:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: cri-docker daemon status:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: cri-docker daemon config:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: cri-dockerd version:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: containerd daemon status:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: containerd daemon config:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: containerd config dump:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: crio daemon status:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: crio daemon config:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: /etc/crio:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

>>> host: crio config:
* Profile "kubenet-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909248"

----------------------- debugLogs end: kubenet-909248 [took: 3.904938329s] --------------------------------
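Note: every probe above failed with "context was not found" / "Profile not found" because this test is skipped before any kubenet-909248 cluster is created, yet the debug-log collector still runs against the never-created profile. The kubectl config it dumps is simply whatever remains in the shared kubeconfig, here a stale stopped-upgrade-708012 entry with current-context unset. A hypothetical way to confirm from the same workspace (neither command creates anything; they only list what exists):

kubectl config get-contexts
out/minikube-linux-arm64 profile list
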
helpers_test.go:175: Cleaning up "kubenet-909248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-909248
--- SKIP: TestNetworkPlugins/group/kubenet (4.08s)

TestNetworkPlugins/group/cilium (3.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-909248 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-909248" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-909248

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-909248

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-909248" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-909248" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-909248" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-909248" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-909248" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: kubelet daemon config:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> k8s: kubelet logs:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16597-2288886/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 May 2023 21:28:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: stopped-upgrade-708012
contexts:
- context:
    cluster: stopped-upgrade-708012
    user: stopped-upgrade-708012
  name: stopped-upgrade-708012
current-context: ""
kind: Config
preferences: {}
users:
- name: stopped-upgrade-708012
  user:
    client-certificate: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.crt
    client-key: /home/jenkins/minikube-integration/16597-2288886/.minikube/profiles/stopped-upgrade-708012/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-909248

>>> host: docker daemon status:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: docker daemon config:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: docker system info:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: cri-docker daemon status:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: cri-docker daemon config:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: cri-dockerd version:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: containerd daemon status:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: containerd daemon config:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: containerd config dump:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: crio daemon status:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: crio daemon config:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: /etc/crio:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

>>> host: crio config:
* Profile "cilium-909248" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909248"

----------------------- debugLogs end: cilium-909248 [took: 3.760898646s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-909248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-909248
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)
