Test Report: Docker_Linux_crio_arm64 19423

1f2c26fb323282b69eee479fdee82bbe44410c3d:2024-08-16:35811

Failed tests (2/328)

Order  Failed test                        Duration (s)
34     TestAddons/parallel/Ingress        152.76
36     TestAddons/parallel/MetricsServer  304.47
TestAddons/parallel/Ingress (152.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-606349 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-606349 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-606349 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eb36317a-c956-4ccb-8b5e-b28d4fb73bed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eb36317a-c956-4ccb-8b5e-b28d4fb73bed] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00383086s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-606349 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.422283409s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
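An aside on that exit status (not part of the captured output): `ssh` exits with the status of the remote command, and curl documents exit code 28 as "operation timeout", which is consistent with the command running for roughly 2m10s before failing. A minimal sketch of the propagation, using a plain shell as a stand-in for the real SSH session:

```shell
# Hypothetical illustration: the ssh invocation reports the remote
# command's exit status. curl's documented code for a timed-out
# transfer is 28; simulate the remote side with a stand-in shell
# instead of a real "minikube ssh curl ..." call.
sh -c 'exit 28'               # stand-in for: ssh host "curl ..."
echo "propagated status: $?"  # prints: propagated status: 28
```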
addons_test.go:288: (dbg) Run:  kubectl --context addons-606349 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 addons disable ingress-dns --alsologtostderr -v=1: (1.670996321s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 addons disable ingress --alsologtostderr -v=1: (7.753686007s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-606349
helpers_test.go:235: (dbg) docker inspect addons-606349:
-- stdout --
	[
	    {
	        "Id": "00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473",
	        "Created": "2024-08-16T12:25:51.656195826Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1387978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-16T12:25:51.797419245Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/hostname",
	        "HostsPath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/hosts",
	        "LogPath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473-json.log",
	        "Name": "/addons-606349",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-606349:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-606349",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b-init/diff:/var/lib/docker/overlay2/287088eb3e5bb39feac9f608f19b8b2d9575f8872ab339d74583c457d8cec343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-606349",
	                "Source": "/var/lib/docker/volumes/addons-606349/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-606349",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-606349",
	                "name.minikube.sigs.k8s.io": "addons-606349",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "498fa5ba924fef32fe6be2aa7de6a03e13b2d90a4f2fe3fe315ab2f3e4eaa7da",
	            "SandboxKey": "/var/run/docker/netns/498fa5ba924f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-606349": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "327cc4c0f93e957099f42f5df5695645a067d1bd5cae73d86f837e1db675491d",
	                    "EndpointID": "8260839433a235518c970e8f26be3c344496d9efb8d679797cca94a5e67a26c4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-606349",
	                        "00fb883fa653"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-606349 -n addons-606349
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 logs -n 25: (1.348384289s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-476882   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | -p download-only-476882              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| delete  | -p download-only-476882              | download-only-476882   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| start   | -o=json --download-only              | download-only-639766   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | -p download-only-639766              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| delete  | -p download-only-639766              | download-only-639766   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| delete  | -p download-only-476882              | download-only-476882   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| delete  | -p download-only-639766              | download-only-639766   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| start   | --download-only -p                   | download-docker-288613 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | download-docker-288613               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-288613            | download-docker-288613 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| start   | --download-only -p                   | binary-mirror-169699   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | binary-mirror-169699                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34739               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-169699              | binary-mirror-169699   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| addons  | enable dashboard -p                  | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | addons-606349                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | addons-606349                        |                        |         |         |                     |                     |
	| start   | -p addons-606349 --wait=true         | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:28 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable         | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:28 UTC | 16 Aug 24 12:29 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-606349 ip                     | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:29 UTC | 16 Aug 24 12:29 UTC |
	| addons  | addons-606349 addons disable         | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:29 UTC | 16 Aug 24 12:29 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-606349 addons                 | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:29 UTC | 16 Aug 24 12:30 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-606349 addons                 | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC | 16 Aug 24 12:30 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC | 16 Aug 24 12:30 UTC |
	|         | addons-606349                        |                        |         |         |                     |                     |
	| ssh     | addons-606349 ssh curl -s            | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-606349 ip                     | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	| addons  | addons-606349 addons disable         | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable         | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:25:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:25:26.244246 1387479 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:25:26.244693 1387479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:26.244740 1387479 out.go:358] Setting ErrFile to fd 2...
	I0816 12:25:26.244762 1387479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:26.245074 1387479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:25:26.245612 1387479 out.go:352] Setting JSON to false
	I0816 12:25:26.246574 1387479 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36470,"bootTime":1723774657,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 12:25:26.246688 1387479 start.go:139] virtualization:  
	I0816 12:25:26.249289 1387479 out.go:177] * [addons-606349] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 12:25:26.251202 1387479 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:25:26.251282 1387479 notify.go:220] Checking for updates...
	I0816 12:25:26.254687 1387479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:25:26.256499 1387479 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:25:26.258399 1387479 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 12:25:26.260433 1387479 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 12:25:26.262108 1387479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:25:26.264250 1387479 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:25:26.285892 1387479 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 12:25:26.286019 1387479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:26.354490 1387479 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 12:25:26.344579173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:26.354607 1387479 docker.go:307] overlay module found
	I0816 12:25:26.356669 1387479 out.go:177] * Using the docker driver based on user configuration
	I0816 12:25:26.358377 1387479 start.go:297] selected driver: docker
	I0816 12:25:26.358392 1387479 start.go:901] validating driver "docker" against <nil>
	I0816 12:25:26.358407 1387479 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:25:26.359012 1387479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:26.409973 1387479 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 12:25:26.401233542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:26.410138 1387479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:25:26.410365 1387479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:25:26.412053 1387479 out.go:177] * Using Docker driver with root privileges
	I0816 12:25:26.413675 1387479 cni.go:84] Creating CNI manager for ""
	I0816 12:25:26.413699 1387479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:25:26.413711 1387479 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 12:25:26.413839 1387479 start.go:340] cluster config:
	{Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:25:26.415811 1387479 out.go:177] * Starting "addons-606349" primary control-plane node in "addons-606349" cluster
	I0816 12:25:26.417573 1387479 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 12:25:26.419346 1387479 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0816 12:25:26.421162 1387479 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:26.421218 1387479 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0816 12:25:26.421233 1387479 cache.go:56] Caching tarball of preloaded images
	I0816 12:25:26.421256 1387479 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0816 12:25:26.421317 1387479 preload.go:172] Found /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0816 12:25:26.421327 1387479 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:25:26.421672 1387479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/config.json ...
	I0816 12:25:26.421704 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/config.json: {Name:mk0b81af05dcdc24aa88b9fad79390a8f27be4ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:26.436293 1387479 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0816 12:25:26.436422 1387479 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0816 12:25:26.436442 1387479 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0816 12:25:26.436447 1387479 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0816 12:25:26.436455 1387479 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0816 12:25:26.436461 1387479 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0816 12:25:43.776811 1387479 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0816 12:25:43.776853 1387479 cache.go:194] Successfully downloaded all kic artifacts
	I0816 12:25:43.776899 1387479 start.go:360] acquireMachinesLock for addons-606349: {Name:mk868a0d8a6549768fa50c40f10f574b8d2ed4ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:25:43.777032 1387479 start.go:364] duration metric: took 109.645µs to acquireMachinesLock for "addons-606349"
	I0816 12:25:43.777079 1387479 start.go:93] Provisioning new machine with config: &{Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:25:43.777166 1387479 start.go:125] createHost starting for "" (driver="docker")
	I0816 12:25:43.779574 1387479 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 12:25:43.779838 1387479 start.go:159] libmachine.API.Create for "addons-606349" (driver="docker")
	I0816 12:25:43.779877 1387479 client.go:168] LocalClient.Create starting
	I0816 12:25:43.780013 1387479 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem
	I0816 12:25:44.067271 1387479 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem
	I0816 12:25:45.207143 1387479 cli_runner.go:164] Run: docker network inspect addons-606349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 12:25:45.226413 1387479 cli_runner.go:211] docker network inspect addons-606349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 12:25:45.226525 1387479 network_create.go:284] running [docker network inspect addons-606349] to gather additional debugging logs...
	I0816 12:25:45.226552 1387479 cli_runner.go:164] Run: docker network inspect addons-606349
	W0816 12:25:45.244182 1387479 cli_runner.go:211] docker network inspect addons-606349 returned with exit code 1
	I0816 12:25:45.244224 1387479 network_create.go:287] error running [docker network inspect addons-606349]: docker network inspect addons-606349: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-606349 not found
	I0816 12:25:45.244239 1387479 network_create.go:289] output of [docker network inspect addons-606349]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-606349 not found
	
	** /stderr **
	I0816 12:25:45.244358 1387479 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 12:25:45.264463 1387479 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b8890}
	I0816 12:25:45.264518 1387479 network_create.go:124] attempt to create docker network addons-606349 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 12:25:45.264596 1387479 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-606349 addons-606349
	I0816 12:25:45.366072 1387479 network_create.go:108] docker network addons-606349 192.168.49.0/24 created
	I0816 12:25:45.366122 1387479 kic.go:121] calculated static IP "192.168.49.2" for the "addons-606349" container
	I0816 12:25:45.366229 1387479 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0816 12:25:45.384830 1387479 cli_runner.go:164] Run: docker volume create addons-606349 --label name.minikube.sigs.k8s.io=addons-606349 --label created_by.minikube.sigs.k8s.io=true
	I0816 12:25:45.410123 1387479 oci.go:103] Successfully created a docker volume addons-606349
	I0816 12:25:45.410339 1387479 cli_runner.go:164] Run: docker run --rm --name addons-606349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606349 --entrypoint /usr/bin/test -v addons-606349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0816 12:25:47.526836 1387479 cli_runner.go:217] Completed: docker run --rm --name addons-606349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606349 --entrypoint /usr/bin/test -v addons-606349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (2.1164221s)
	I0816 12:25:47.526865 1387479 oci.go:107] Successfully prepared a docker volume addons-606349
	I0816 12:25:47.526885 1387479 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:47.526905 1387479 kic.go:194] Starting extracting preloaded images to volume ...
	I0816 12:25:47.526984 1387479 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-606349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 12:25:51.585348 1387479 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-606349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.058321851s)
	I0816 12:25:51.585381 1387479 kic.go:203] duration metric: took 4.058473184s to extract preloaded images to volume ...
	W0816 12:25:51.585513 1387479 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0816 12:25:51.585640 1387479 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 12:25:51.642591 1387479 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-606349 --name addons-606349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-606349 --network addons-606349 --ip 192.168.49.2 --volume addons-606349:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0816 12:25:51.971355 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Running}}
	I0816 12:25:51.990595 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:25:52.016857 1387479 cli_runner.go:164] Run: docker exec addons-606349 stat /var/lib/dpkg/alternatives/iptables
	I0816 12:25:52.086314 1387479 oci.go:144] the created container "addons-606349" has a running status.
	I0816 12:25:52.086346 1387479 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa...
	I0816 12:25:52.548884 1387479 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 12:25:52.589414 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:25:52.631974 1387479 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 12:25:52.631993 1387479 kic_runner.go:114] Args: [docker exec --privileged addons-606349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 12:25:52.729928 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:25:52.748168 1387479 machine.go:93] provisionDockerMachine start ...
	I0816 12:25:52.748290 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:52.769078 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:52.769442 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:52.769461 1387479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 12:25:52.925415 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606349
	
	I0816 12:25:52.925442 1387479 ubuntu.go:169] provisioning hostname "addons-606349"
	I0816 12:25:52.925511 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:52.947337 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:52.947592 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:52.947604 1387479 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-606349 && echo "addons-606349" | sudo tee /etc/hostname
	I0816 12:25:53.113126 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606349
	
	I0816 12:25:53.113265 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:53.131666 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:53.131905 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:53.131922 1387479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-606349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-606349/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-606349' | sudo tee -a /etc/hosts; 
				fi
			fi
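The guarded hosts-file update that minikube runs over SSH above can be replayed locally against a scratch file (no sudo; the starting `/etc/hosts` content and the simplified `grep` patterns here are illustrative assumptions, not the exact expressions minikube executes):

```shell
# Sketch of the /etc/hosts hostname fix-up, run against a temp file
# instead of the real /etc/hosts. "addons-606349" stands in for the
# provisioned hostname; the seed content below is made up.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

NAME=addons-606349
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 entry is rewritten to the new hostname.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Either branch leaves `127.0.1.1` resolving to the node name, which keeps `sudo` and kubeadm from stalling on reverse lookups inside the container.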
	I0816 12:25:53.269843 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:25:53.269874 1387479 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1381335/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1381335/.minikube}
	I0816 12:25:53.269911 1387479 ubuntu.go:177] setting up certificates
	I0816 12:25:53.269921 1387479 provision.go:84] configureAuth start
	I0816 12:25:53.269984 1387479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606349
	I0816 12:25:53.287073 1387479 provision.go:143] copyHostCerts
	I0816 12:25:53.287162 1387479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1381335/.minikube/key.pem (1679 bytes)
	I0816 12:25:53.287292 1387479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.pem (1078 bytes)
	I0816 12:25:53.287359 1387479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1381335/.minikube/cert.pem (1123 bytes)
	I0816 12:25:53.287413 1387479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca-key.pem org=jenkins.addons-606349 san=[127.0.0.1 192.168.49.2 addons-606349 localhost minikube]
	I0816 12:25:55.008806 1387479 provision.go:177] copyRemoteCerts
	I0816 12:25:55.008898 1387479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:25:55.008952 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.030478 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.127227 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 12:25:55.153005 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:25:55.179004 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 12:25:55.204145 1387479 provision.go:87] duration metric: took 1.93420761s to configureAuth
	I0816 12:25:55.204175 1387479 ubuntu.go:193] setting minikube options for container-runtime
	I0816 12:25:55.204368 1387479 config.go:182] Loaded profile config "addons-606349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:25:55.204484 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.221601 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:55.221867 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:55.221890 1387479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:25:55.457148 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:25:55.457175 1387479 machine.go:96] duration metric: took 2.708984616s to provisionDockerMachine
	I0816 12:25:55.457186 1387479 client.go:171] duration metric: took 11.677299475s to LocalClient.Create
	I0816 12:25:55.457199 1387479 start.go:167] duration metric: took 11.677363294s to libmachine.API.Create "addons-606349"
	I0816 12:25:55.457208 1387479 start.go:293] postStartSetup for "addons-606349" (driver="docker")
	I0816 12:25:55.457218 1387479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:25:55.457286 1387479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:25:55.457332 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.474153 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.571167 1387479 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:25:55.574544 1387479 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 12:25:55.574583 1387479 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 12:25:55.574594 1387479 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 12:25:55.574601 1387479 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0816 12:25:55.574612 1387479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1381335/.minikube/addons for local assets ...
	I0816 12:25:55.574688 1387479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1381335/.minikube/files for local assets ...
	I0816 12:25:55.574713 1387479 start.go:296] duration metric: took 117.500173ms for postStartSetup
	I0816 12:25:55.575035 1387479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606349
	I0816 12:25:55.590591 1387479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/config.json ...
	I0816 12:25:55.590894 1387479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:25:55.590946 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.606849 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.698649 1387479 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0816 12:25:55.703305 1387479 start.go:128] duration metric: took 11.926122525s to createHost
	I0816 12:25:55.703332 1387479 start.go:83] releasing machines lock for "addons-606349", held for 11.926285896s
	I0816 12:25:55.703446 1387479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606349
	I0816 12:25:55.720930 1387479 ssh_runner.go:195] Run: cat /version.json
	I0816 12:25:55.720992 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.721253 1387479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:25:55.721302 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.746420 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.760655 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.841641 1387479 ssh_runner.go:195] Run: systemctl --version
	I0816 12:25:55.967182 1387479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:25:56.113965 1387479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 12:25:56.118239 1387479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:25:56.138549 1387479 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0816 12:25:56.138624 1387479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:25:56.173080 1387479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0816 12:25:56.173106 1387479 start.go:495] detecting cgroup driver to use...
	I0816 12:25:56.173139 1387479 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0816 12:25:56.173190 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:25:56.190762 1387479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:25:56.202960 1387479 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:25:56.203027 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:25:56.217925 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:25:56.233459 1387479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:25:56.334115 1387479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:25:56.430610 1387479 docker.go:233] disabling docker service ...
	I0816 12:25:56.430701 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:25:56.451721 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:25:56.464002 1387479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:25:56.551221 1387479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:25:56.651451 1387479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:25:56.663024 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:25:56.679596 1387479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:25:56.679665 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.691059 1387479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:25:56.691171 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.700961 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.710933 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.720867 1387479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:25:56.730366 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.740168 1387479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.755971 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.766391 1387479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:25:56.775490 1387479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:25:56.784554 1387479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:25:56.863365 1387479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:25:56.988848 1387479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:25:56.988997 1387479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:25:56.992514 1387479 start.go:563] Will wait 60s for crictl version
	I0816 12:25:56.992583 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:25:56.995969 1387479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:25:57.038886 1387479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0816 12:25:57.039007 1387479 ssh_runner.go:195] Run: crio --version
	I0816 12:25:57.081443 1387479 ssh_runner.go:195] Run: crio --version
	I0816 12:25:57.122218 1387479 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0816 12:25:57.124051 1387479 cli_runner.go:164] Run: docker network inspect addons-606349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 12:25:57.140005 1387479 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 12:25:57.143844 1387479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:25:57.155902 1387479 kubeadm.go:883] updating cluster {Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:25:57.156024 1387479 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:57.156085 1387479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:25:57.232976 1387479 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:25:57.233003 1387479 crio.go:433] Images already preloaded, skipping extraction
	I0816 12:25:57.233066 1387479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:25:57.269605 1387479 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:25:57.269632 1387479 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:25:57.269641 1387479 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0816 12:25:57.269821 1387479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-606349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:25:57.269923 1387479 ssh_runner.go:195] Run: crio config
	I0816 12:25:57.317966 1387479 cni.go:84] Creating CNI manager for ""
	I0816 12:25:57.317987 1387479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:25:57.317997 1387479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:25:57.318047 1387479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-606349 NodeName:addons-606349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:25:57.318222 1387479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-606349"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:25:57.318300 1387479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:25:57.327302 1387479 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:25:57.327399 1387479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 12:25:57.336191 1387479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0816 12:25:57.354460 1387479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:25:57.373469 1387479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0816 12:25:57.391630 1387479 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 12:25:57.394991 1387479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:25:57.405830 1387479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:25:57.487288 1387479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:25:57.501171 1387479 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349 for IP: 192.168.49.2
	I0816 12:25:57.501196 1387479 certs.go:194] generating shared ca certs ...
	I0816 12:25:57.501241 1387479 certs.go:226] acquiring lock for ca certs: {Name:mkdf245990f96a1e9a969aa18ae3f00f60af8904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:57.501406 1387479 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key
	I0816 12:25:57.773948 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt ...
	I0816 12:25:57.773981 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt: {Name:mk0abc725d07af006b1bd80999d9cb74372c95a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:57.774187 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key ...
	I0816 12:25:57.774202 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key: {Name:mk244103f56694344cc7fa24fc8b304dd5ded8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:57.774807 1387479 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key
	I0816 12:25:58.658106 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.crt ...
	I0816 12:25:58.658141 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.crt: {Name:mk329aa97becc0d5b2bd470a4f80d695baf7cc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.658336 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key ...
	I0816 12:25:58.658349 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key: {Name:mk0054d6804513c813fbc7c8345ac7f5a155ba89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.658830 1387479 certs.go:256] generating profile certs ...
	I0816 12:25:58.658898 1387479 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.key
	I0816 12:25:58.658916 1387479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt with IP's: []
	I0816 12:25:58.873418 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt ...
	I0816 12:25:58.873451 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: {Name:mkf34b318a06ff1a691f707ba7f1efe691343c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.874127 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.key ...
	I0816 12:25:58.874144 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.key: {Name:mk360039aca615933913e2216c678df67c9fd603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.874868 1387479 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a
	I0816 12:25:58.874891 1387479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0816 12:25:59.306258 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a ...
	I0816 12:25:59.306293 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a: {Name:mk9714ef2bb629be7900e291b21c0af1c17e99df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.307020 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a ...
	I0816 12:25:59.307040 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a: {Name:mk2054854d323378f4639c6fb7f0e7448b862005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.307467 1387479 certs.go:381] copying /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a -> /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt
	I0816 12:25:59.307565 1387479 certs.go:385] copying /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a -> /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key
	I0816 12:25:59.307624 1387479 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key
	I0816 12:25:59.307646 1387479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt with IP's: []
	I0816 12:25:59.691435 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt ...
	I0816 12:25:59.691471 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt: {Name:mk54d74acf5e459a95168204396bdfebf4a6453e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.692043 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key ...
	I0816 12:25:59.692064 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key: {Name:mkcd3b978b0fa1d409c8422bf4b5e9571781fd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.692771 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 12:25:59.692843 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem (1078 bytes)
	I0816 12:25:59.692878 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:25:59.692920 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/key.pem (1679 bytes)
	I0816 12:25:59.693547 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:25:59.719815 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:25:59.745920 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:25:59.770057 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:25:59.795508 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 12:25:59.822319 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 12:25:59.847247 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:25:59.872299 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:25:59.899440 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:25:59.925192 1387479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:25:59.943836 1387479 ssh_runner.go:195] Run: openssl version
	I0816 12:25:59.949518 1387479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:25:59.959204 1387479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:25:59.962932 1387479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:25:59.962997 1387479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:25:59.970252 1387479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:25:59.979696 1387479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:25:59.983157 1387479 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:25:59.983207 1387479 kubeadm.go:392] StartCluster: {Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:25:59.983297 1387479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:25:59.983363 1387479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:26:00.115353 1387479 cri.go:89] found id: ""
	I0816 12:26:00.115446 1387479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 12:26:00.175288 1387479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 12:26:00.199801 1387479 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0816 12:26:00.199885 1387479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 12:26:00.245956 1387479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 12:26:00.245975 1387479 kubeadm.go:157] found existing configuration files:
	
	I0816 12:26:00.246048 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 12:26:00.278254 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 12:26:00.278326 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 12:26:00.312259 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 12:26:00.352452 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 12:26:00.352540 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 12:26:00.382237 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 12:26:00.413887 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 12:26:00.413959 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 12:26:00.425369 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 12:26:00.436805 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 12:26:00.436893 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 12:26:00.447577 1387479 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 12:26:00.495011 1387479 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 12:26:00.495462 1387479 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 12:26:00.534487 1387479 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0816 12:26:00.534609 1387479 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0816 12:26:00.534649 1387479 kubeadm.go:310] OS: Linux
	I0816 12:26:00.534698 1387479 kubeadm.go:310] CGROUPS_CPU: enabled
	I0816 12:26:00.534772 1387479 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0816 12:26:00.534825 1387479 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0816 12:26:00.534873 1387479 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0816 12:26:00.534924 1387479 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0816 12:26:00.534978 1387479 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0816 12:26:00.535031 1387479 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0816 12:26:00.535082 1387479 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0816 12:26:00.535132 1387479 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0816 12:26:00.609032 1387479 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 12:26:00.609143 1387479 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 12:26:00.609238 1387479 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 12:26:00.617368 1387479 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 12:26:00.620808 1387479 out.go:235]   - Generating certificates and keys ...
	I0816 12:26:00.620930 1387479 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 12:26:00.621044 1387479 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 12:26:02.014464 1387479 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 12:26:02.544117 1387479 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 12:26:02.970050 1387479 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 12:26:03.178531 1387479 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 12:26:03.475097 1387479 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 12:26:03.475384 1387479 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-606349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 12:26:03.846624 1387479 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 12:26:03.846844 1387479 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-606349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 12:26:04.314895 1387479 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 12:26:05.128042 1387479 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 12:26:05.859132 1387479 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 12:26:05.859611 1387479 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 12:26:06.911669 1387479 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 12:26:07.106803 1387479 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 12:26:07.803204 1387479 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 12:26:08.245881 1387479 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 12:26:08.647671 1387479 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 12:26:08.648387 1387479 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 12:26:08.651451 1387479 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 12:26:08.653660 1387479 out.go:235]   - Booting up control plane ...
	I0816 12:26:08.653774 1387479 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 12:26:08.653850 1387479 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 12:26:08.656007 1387479 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 12:26:08.666103 1387479 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 12:26:08.672626 1387479 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 12:26:08.672882 1387479 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 12:26:08.771618 1387479 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 12:26:08.771740 1387479 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 12:26:10.773496 1387479 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001941203s
	I0816 12:26:10.773583 1387479 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 12:26:16.274804 1387479 kubeadm.go:310] [api-check] The API server is healthy after 5.501278399s
	I0816 12:26:16.299231 1387479 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 12:26:16.315289 1387479 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 12:26:16.343256 1387479 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 12:26:16.343451 1387479 kubeadm.go:310] [mark-control-plane] Marking the node addons-606349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 12:26:16.354524 1387479 kubeadm.go:310] [bootstrap-token] Using token: 1vr55b.ts8mrotbuaenwvy3
	I0816 12:26:16.356352 1387479 out.go:235]   - Configuring RBAC rules ...
	I0816 12:26:16.356487 1387479 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 12:26:16.362891 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 12:26:16.371196 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 12:26:16.375049 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 12:26:16.378639 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 12:26:16.383291 1387479 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 12:26:16.684409 1387479 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 12:26:17.130758 1387479 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 12:26:17.684856 1387479 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 12:26:17.684938 1387479 kubeadm.go:310] 
	I0816 12:26:17.685018 1387479 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 12:26:17.685025 1387479 kubeadm.go:310] 
	I0816 12:26:17.685119 1387479 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 12:26:17.685125 1387479 kubeadm.go:310] 
	I0816 12:26:17.685162 1387479 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 12:26:17.685221 1387479 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 12:26:17.685280 1387479 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 12:26:17.685298 1387479 kubeadm.go:310] 
	I0816 12:26:17.685367 1387479 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 12:26:17.685381 1387479 kubeadm.go:310] 
	I0816 12:26:17.685444 1387479 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 12:26:17.685452 1387479 kubeadm.go:310] 
	I0816 12:26:17.685503 1387479 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 12:26:17.685592 1387479 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 12:26:17.685677 1387479 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 12:26:17.685693 1387479 kubeadm.go:310] 
	I0816 12:26:17.685827 1387479 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 12:26:17.685919 1387479 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 12:26:17.685934 1387479 kubeadm.go:310] 
	I0816 12:26:17.686022 1387479 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vr55b.ts8mrotbuaenwvy3 \
	I0816 12:26:17.686160 1387479 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9e7c8c29c13fd1e89c944beb24d85c1145fec055b6164d87d49cd9cc484240a \
	I0816 12:26:17.686188 1387479 kubeadm.go:310] 	--control-plane 
	I0816 12:26:17.686196 1387479 kubeadm.go:310] 
	I0816 12:26:17.686298 1387479 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 12:26:17.686309 1387479 kubeadm.go:310] 
	I0816 12:26:17.686413 1387479 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vr55b.ts8mrotbuaenwvy3 \
	I0816 12:26:17.686554 1387479 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9e7c8c29c13fd1e89c944beb24d85c1145fec055b6164d87d49cd9cc484240a 
	I0816 12:26:17.690142 1387479 kubeadm.go:310] W0816 12:26:00.489871    1177 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:26:17.690432 1387479 kubeadm.go:310] W0816 12:26:00.491748    1177 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:26:17.690644 1387479 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0816 12:26:17.690744 1387479 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 12:26:17.690769 1387479 cni.go:84] Creating CNI manager for ""
	I0816 12:26:17.690781 1387479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:26:17.694115 1387479 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 12:26:17.695952 1387479 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 12:26:17.700275 1387479 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 12:26:17.700308 1387479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 12:26:17.721894 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 12:26:18.020373 1387479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 12:26:18.020542 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:18.020640 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-606349 minikube.k8s.io/updated_at=2024_08_16T12_26_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=addons-606349 minikube.k8s.io/primary=true
	I0816 12:26:18.221540 1387479 ops.go:34] apiserver oom_adj: -16
	I0816 12:26:18.221638 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:18.722438 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:19.221726 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:19.722083 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:20.222043 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:20.722263 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:21.221829 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:21.721795 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:21.815534 1387479 kubeadm.go:1113] duration metric: took 3.795066848s to wait for elevateKubeSystemPrivileges
	I0816 12:26:21.815570 1387479 kubeadm.go:394] duration metric: took 21.832366761s to StartCluster
	I0816 12:26:21.815588 1387479 settings.go:142] acquiring lock: {Name:mk061dbb4361ece7e549334669d8986f48680b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:26:21.815719 1387479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:26:21.816191 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/kubeconfig: {Name:mk5d80d953866a4dbf0a0227ebebea809a97d7a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:26:21.816957 1387479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:26:21.817096 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 12:26:21.817357 1387479 config.go:182] Loaded profile config "addons-606349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:26:21.817396 1387479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0816 12:26:21.817479 1387479 addons.go:69] Setting yakd=true in profile "addons-606349"
	I0816 12:26:21.817504 1387479 addons.go:234] Setting addon yakd=true in "addons-606349"
	I0816 12:26:21.817532 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.818071 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.818303 1387479 addons.go:69] Setting inspektor-gadget=true in profile "addons-606349"
	I0816 12:26:21.818329 1387479 addons.go:234] Setting addon inspektor-gadget=true in "addons-606349"
	I0816 12:26:21.818353 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.818751 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.819138 1387479 addons.go:69] Setting metrics-server=true in profile "addons-606349"
	I0816 12:26:21.819168 1387479 addons.go:234] Setting addon metrics-server=true in "addons-606349"
	I0816 12:26:21.819193 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.819585 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.819896 1387479 addons.go:69] Setting cloud-spanner=true in profile "addons-606349"
	I0816 12:26:21.819931 1387479 addons.go:234] Setting addon cloud-spanner=true in "addons-606349"
	I0816 12:26:21.819968 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.820372 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.820537 1387479 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-606349"
	I0816 12:26:21.820564 1387479 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-606349"
	I0816 12:26:21.820600 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.820979 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.826362 1387479 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-606349"
	I0816 12:26:21.826444 1387479 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-606349"
	I0816 12:26:21.826482 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.826947 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.841953 1387479 addons.go:69] Setting registry=true in profile "addons-606349"
	I0816 12:26:21.841999 1387479 addons.go:234] Setting addon registry=true in "addons-606349"
	I0816 12:26:21.842038 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.842517 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.845976 1387479 addons.go:69] Setting default-storageclass=true in profile "addons-606349"
	I0816 12:26:21.846028 1387479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-606349"
	I0816 12:26:21.846341 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.861270 1387479 addons.go:69] Setting storage-provisioner=true in profile "addons-606349"
	I0816 12:26:21.861316 1387479 addons.go:234] Setting addon storage-provisioner=true in "addons-606349"
	I0816 12:26:21.861354 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.861855 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.862020 1387479 addons.go:69] Setting gcp-auth=true in profile "addons-606349"
	I0816 12:26:21.862049 1387479 mustload.go:65] Loading cluster: addons-606349
	I0816 12:26:21.862202 1387479 config.go:182] Loaded profile config "addons-606349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:26:21.862416 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.881402 1387479 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-606349"
	I0816 12:26:21.881438 1387479 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-606349"
	I0816 12:26:21.881781 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.902129 1387479 addons.go:69] Setting ingress=true in profile "addons-606349"
	I0816 12:26:21.902176 1387479 addons.go:234] Setting addon ingress=true in "addons-606349"
	I0816 12:26:21.902224 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.902724 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.902988 1387479 addons.go:69] Setting volcano=true in profile "addons-606349"
	I0816 12:26:21.903057 1387479 addons.go:234] Setting addon volcano=true in "addons-606349"
	I0816 12:26:21.903121 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.910931 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.925344 1387479 addons.go:69] Setting ingress-dns=true in profile "addons-606349"
	I0816 12:26:21.925389 1387479 addons.go:234] Setting addon ingress-dns=true in "addons-606349"
	I0816 12:26:21.925436 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.926005 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.930186 1387479 out.go:177] * Verifying Kubernetes components...
	I0816 12:26:21.936696 1387479 addons.go:69] Setting volumesnapshots=true in profile "addons-606349"
	I0816 12:26:21.936736 1387479 addons.go:234] Setting addon volumesnapshots=true in "addons-606349"
	I0816 12:26:21.936791 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.937385 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.981496 1387479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:26:21.985040 1387479 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 12:26:21.991719 1387479 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 12:26:21.991804 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 12:26:21.991815 1387479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 12:26:21.991887 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:21.992483 1387479 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 12:26:21.995593 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 12:26:21.995622 1387479 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 12:26:21.995699 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.003893 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 12:26:22.004037 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 12:26:22.007477 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.030799 1387479 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 12:26:22.034095 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 12:26:22.034304 1387479 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 12:26:22.035931 1387479 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 12:26:22.035953 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 12:26:22.036028 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.036295 1387479 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 12:26:22.036309 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 12:26:22.036351 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.067930 1387479 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 12:26:22.069687 1387479 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:26:22.069711 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 12:26:22.069802 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.079848 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 12:26:22.082040 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 12:26:22.083812 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 12:26:22.085628 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 12:26:22.087600 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 12:26:22.089605 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 12:26:22.091940 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 12:26:22.093792 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 12:26:22.096326 1387479 addons.go:234] Setting addon default-storageclass=true in "addons-606349"
	I0816 12:26:22.096373 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:22.096836 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:22.103530 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 12:26:22.103578 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 12:26:22.103654 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.133357 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.187267 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:22.195638 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 12:26:22.195879 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:26:22.197472 1387479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:26:22.197492 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 12:26:22.197559 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.199477 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:26:22.201185 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0816 12:26:22.203509 1387479 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:26:22.203530 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 12:26:22.203596 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.220317 1387479 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-606349"
	I0816 12:26:22.220363 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:22.220823 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	W0816 12:26:22.221172 1387479 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 12:26:22.237432 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 12:26:22.239157 1387479 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:26:22.239185 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 12:26:22.239261 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.250996 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 12:26:22.251305 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.254652 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.255441 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 12:26:22.255461 1387479 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 12:26:22.255536 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.269013 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.285882 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.342209 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.351221 1387479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 12:26:22.351242 1387479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 12:26:22.351302 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.369269 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.393893 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.404561 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.407605 1387479 out.go:177]   - Using image docker.io/busybox:stable
	I0816 12:26:22.411058 1387479 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 12:26:22.418294 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 12:26:22.418320 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 12:26:22.419155 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.419883 1387479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:26:22.419905 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 12:26:22.419964 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.420667 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.441918 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	W0816 12:26:22.443015 1387479 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0816 12:26:22.443043 1387479 retry.go:31] will retry after 253.190095ms: ssh: handshake failed: EOF
	I0816 12:26:22.455419 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.553555 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 12:26:22.560042 1387479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 12:26:22.685454 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 12:26:22.701257 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 12:26:22.701318 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 12:26:22.703978 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:26:22.704041 1387479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 12:26:22.715645 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 12:26:22.715711 1387479 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 12:26:22.751644 1387479 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 12:26:22.751708 1387479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 12:26:22.756494 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:26:22.762008 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 12:26:22.762072 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 12:26:22.783053 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:26:22.794153 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:26:22.796859 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:26:22.808356 1387479 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 12:26:22.808430 1387479 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 12:26:22.827184 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:26:22.871565 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 12:26:22.871629 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 12:26:22.874864 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 12:26:22.874925 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 12:26:22.875468 1387479 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 12:26:22.875509 1387479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 12:26:22.900568 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:26:22.907833 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 12:26:22.907898 1387479 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 12:26:22.957262 1387479 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:26:22.957326 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 12:26:22.980125 1387479 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 12:26:22.980188 1387479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 12:26:23.010188 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 12:26:23.010271 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 12:26:23.045261 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 12:26:23.100440 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 12:26:23.100513 1387479 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 12:26:23.103595 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 12:26:23.103662 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 12:26:23.125222 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:26:23.160647 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 12:26:23.160731 1387479 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 12:26:23.204177 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 12:26:23.204264 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 12:26:23.270022 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:26:23.270096 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 12:26:23.299741 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 12:26:23.299808 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 12:26:23.318244 1387479 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:26:23.318318 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 12:26:23.369932 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 12:26:23.370005 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 12:26:23.430629 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:26:23.447156 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 12:26:23.447230 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 12:26:23.455221 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:26:23.474905 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 12:26:23.474988 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 12:26:23.541306 1387479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.724178461s)
	I0816 12:26:23.541407 1387479 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.559833859s)
	I0816 12:26:23.541574 1387479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:26:23.541734 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 12:26:23.556141 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 12:26:23.556223 1387479 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 12:26:23.570749 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 12:26:23.570819 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 12:26:23.656933 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 12:26:23.657007 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 12:26:23.663739 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:26:23.663804 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 12:26:23.742251 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 12:26:23.742323 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 12:26:23.760399 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:26:23.860977 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:26:23.861046 1387479 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 12:26:23.981909 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:26:26.382179 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.696649581s)
	I0816 12:26:26.382239 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.625675035s)
	I0816 12:26:28.818938 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.035802448s)
	I0816 12:26:28.818974 1387479 addons.go:475] Verifying addon ingress=true in "addons-606349"
	I0816 12:26:28.819177 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.024951562s)
	I0816 12:26:28.819257 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.022313182s)
	I0816 12:26:28.819303 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.992098672s)
	I0816 12:26:28.819541 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.918900784s)
	I0816 12:26:28.819563 1387479 addons.go:475] Verifying addon metrics-server=true in "addons-606349"
	I0816 12:26:28.819591 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.774255927s)
	I0816 12:26:28.819755 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.694435854s)
	I0816 12:26:28.819769 1387479 addons.go:475] Verifying addon registry=true in "addons-606349"
	I0816 12:26:28.819871 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.389163723s)
	I0816 12:26:28.822639 1387479 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-606349 service yakd-dashboard -n yakd-dashboard
	
	I0816 12:26:28.822747 1387479 out.go:177] * Verifying registry addon...
	I0816 12:26:28.822789 1387479 out.go:177] * Verifying ingress addon...
	I0816 12:26:28.826527 1387479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 12:26:28.827515 1387479 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0816 12:26:28.852243 1387479 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
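The 'storage-provisioner-rancher' warning above is a standard optimistic-concurrency conflict: another client modified the StorageClass between minikube's read and write, so the apiserver rejected the stale resourceVersion. As a rough illustration only (not minikube's own code), marking a class default can simply be retried with `kubectl patch`, which re-reads the current object server-side; the class name `local-path` comes from the log, the old-default name `standard` is a hypothetical placeholder.

```shell
# Retry marking local-path as the default storage class.
# `kubectl patch` applies against the object's current resourceVersion,
# so a plain re-run resolves the "object has been modified" conflict.
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Optionally un-mark the previous default first (name is hypothetical):
kubectl patch storageclass standard \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```

These are cluster-dependent command fragments, shown only to clarify why the addon's own retry succeeds.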
	I0816 12:26:28.877939 1387479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 12:26:28.878011 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:28.879079 1387479 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 12:26:28.879144 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:28.880213 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.424894137s)
	W0816 12:26:28.880281 1387479 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 12:26:28.880313 1387479 retry.go:31] will retry after 181.794351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
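The failed apply and retry above are the usual CRD ordering race: the VolumeSnapshotClass CRD is created in the same `kubectl apply` batch as the `csi-hostpath-snapclass` object, so when that object is submitted the REST mapping for `snapshot.storage.k8s.io/v1` does not exist yet. A hedged sketch of the generic two-phase pattern that avoids this (the file paths and CRD name come from the log; the phased apply is an illustration, not minikube's exact code, which instead retries the whole batch):

```shell
# Phase 1: create the CRD and wait until the apiserver reports it Established.
kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for=condition=Established \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s

# Phase 2: only now create objects of the new kind.
kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
```

These commands require a live cluster; they are included only to explain why the subsequent `apply --force` retry at 12:26:29 succeeds once the CRDs have settled.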
	I0816 12:26:28.880372 1387479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.338587386s)
	I0816 12:26:28.880404 1387479 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0816 12:26:28.880441 1387479 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.338854937s)
	I0816 12:26:28.881583 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.121088195s)
	I0816 12:26:28.882672 1387479 node_ready.go:35] waiting up to 6m0s for node "addons-606349" to be "Ready" ...
	I0816 12:26:29.062958 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:26:29.373380 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:29.379712 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:29.433197 1387479 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-606349" context rescaled to 1 replicas
	I0816 12:26:29.649411 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.667401759s)
	I0816 12:26:29.649489 1387479 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-606349"
	I0816 12:26:29.652875 1387479 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 12:26:29.657486 1387479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 12:26:29.670040 1387479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 12:26:29.670069 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:29.832774 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:29.833833 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:30.163903 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:30.333831 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:30.336447 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:30.662624 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:30.833251 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:30.834589 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:30.888060 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:31.164926 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:31.334601 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:31.335807 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:31.662527 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:31.843978 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:31.844729 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:32.173875 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:32.302524 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.239476235s)
	I0816 12:26:32.333700 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:32.334233 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:32.662637 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:32.768571 1387479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 12:26:32.768657 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:32.784631 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:32.837343 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:32.838172 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:32.925508 1387479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 12:26:32.945156 1387479 addons.go:234] Setting addon gcp-auth=true in "addons-606349"
	I0816 12:26:32.945252 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:32.945771 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:32.961850 1387479 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 12:26:32.961910 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:32.978448 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:33.103965 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:26:33.105911 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 12:26:33.107630 1387479 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 12:26:33.107648 1387479 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 12:26:33.127810 1387479 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 12:26:33.127838 1387479 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 12:26:33.149337 1387479 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:26:33.149359 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 12:26:33.174616 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:33.183957 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:26:33.335976 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:33.337158 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:33.391524 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:33.663681 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:33.782154 1387479 addons.go:475] Verifying addon gcp-auth=true in "addons-606349"
	I0816 12:26:33.784300 1387479 out.go:177] * Verifying gcp-auth addon...
	I0816 12:26:33.787215 1387479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 12:26:33.796312 1387479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 12:26:33.796381 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:33.831396 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:33.831717 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:34.161973 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:34.290862 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:34.330172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:34.331474 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:34.662775 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:34.792116 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:34.833200 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:34.834097 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:35.161064 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:35.297022 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:35.330754 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:35.332369 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:35.662268 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:35.790807 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:35.831089 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:35.831532 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:35.885805 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:36.161565 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:36.291853 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:36.329974 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:36.331595 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:36.661189 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:36.790263 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:36.830836 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:36.831681 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:37.162266 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:37.290613 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:37.330612 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:37.331381 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:37.661652 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:37.791376 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:37.830883 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:37.831729 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:37.885997 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:38.161257 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:38.290912 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:38.330175 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:38.331144 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:38.660938 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:38.790984 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:38.830082 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:38.831486 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:39.161525 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:39.290615 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:39.330752 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:39.331620 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:39.660891 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:39.790951 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:39.830289 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:39.831681 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:40.161733 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:40.291146 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:40.331370 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:40.331587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:40.387022 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:40.661505 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:40.791014 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:40.830403 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:40.832250 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:41.161354 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:41.290675 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:41.331021 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:41.332029 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:41.660857 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:41.791222 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:41.830272 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:41.832936 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:42.162129 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:42.291598 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:42.331769 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:42.332160 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:42.662238 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:42.791199 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:42.830977 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:42.831739 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:42.885952 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:43.161207 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:43.291070 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:43.329910 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:43.330956 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:43.661570 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:43.790654 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:43.831153 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:43.831984 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:44.162107 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:44.291090 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:44.331056 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:44.331884 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:44.661825 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:44.791355 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:44.831463 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:44.831876 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:44.886675 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:45.167473 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:45.291642 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:45.330849 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:45.333274 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:45.661130 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:45.790725 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:45.831280 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:45.832041 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:46.160998 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:46.291132 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:46.331402 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:46.332231 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:46.660799 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:46.791575 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:46.834459 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:46.835600 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:47.161661 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:47.290920 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:47.329630 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:47.331455 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:47.386729 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:47.660709 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:47.791454 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:47.830188 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:47.831909 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:48.161429 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:48.290796 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:48.329809 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:48.332068 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:48.662769 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:48.790591 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:48.830779 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:48.831547 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:49.160899 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:49.291301 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:49.330641 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:49.332345 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:49.660642 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:49.790816 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:49.830063 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:49.831119 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:49.886414 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:50.161580 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:50.291013 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:50.331238 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:50.332175 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:50.661217 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:50.790874 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:50.829964 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:50.831754 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:51.161870 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:51.290680 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:51.330431 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:51.331532 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:51.661356 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:51.790491 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:51.829707 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:51.830964 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:52.161617 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:52.291120 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:52.329843 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:52.331617 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:52.385657 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:52.661217 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:52.790918 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:52.829715 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:52.831464 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:53.162564 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:53.290957 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:53.330408 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:53.342723 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:53.662030 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:53.791334 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:53.832182 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:53.834268 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:54.161945 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:54.291790 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:54.332338 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:54.332714 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:54.386089 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:54.661870 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:54.791236 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:54.831350 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:54.831813 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:55.161232 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:55.291047 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:55.331908 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:55.332209 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:55.661015 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:55.790632 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:55.830865 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:55.831734 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:56.161548 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:56.290776 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:56.330585 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:56.331708 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:56.386640 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:56.662363 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:56.792288 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:56.830618 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:56.832299 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:57.162459 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:57.291226 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:57.331424 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:57.331912 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:57.661495 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:57.790529 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:57.830202 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:57.831510 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:58.161454 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:58.291470 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:58.331172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:58.331665 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:58.662150 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:58.790295 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:58.831899 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:58.832766 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:58.886424 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:59.161257 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:59.290731 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:59.330597 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:59.332058 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:59.661834 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:59.791622 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:59.831186 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:59.831941 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:00.166695 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:00.297315 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:00.362271 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:00.362597 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:00.661340 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:00.790451 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:00.830404 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:00.831636 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:01.161087 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:01.290562 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:01.330792 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:01.331478 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:01.386104 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:01.661508 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:01.791023 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:01.830413 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:01.831850 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:02.161691 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:02.291407 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:02.331212 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:02.332442 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:02.661364 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:02.791166 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:02.830820 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:02.831800 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:03.161826 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:03.290671 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:03.331735 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:03.332194 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:03.386471 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:03.661681 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:03.791375 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:03.830671 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:03.832438 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:04.161130 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:04.290769 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:04.329944 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:04.333424 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:04.662148 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:04.790628 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:04.831305 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:04.832088 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:05.161361 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:05.290652 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:05.329797 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:05.331197 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:05.387530 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:05.660931 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:05.791236 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:05.831085 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:05.831582 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:06.161463 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:06.290328 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:06.330945 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:06.332321 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:06.661982 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:06.791339 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:06.830705 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:06.831691 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:07.161182 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:07.290682 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:07.331007 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:07.331874 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:07.661192 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:07.790850 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:07.830832 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:07.831748 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:07.885981 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:08.160979 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:08.291884 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:08.331100 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:08.331911 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:08.674533 1387479 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 12:27:08.674561 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:08.792883 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:08.908809 1387479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 12:27:08.908835 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:08.923083 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:08.932328 1387479 node_ready.go:49] node "addons-606349" has status "Ready":"True"
	I0816 12:27:08.932356 1387479 node_ready.go:38] duration metric: took 40.049634146s for node "addons-606349" to be "Ready" ...
	I0816 12:27:08.932368 1387479 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:27:08.972961 1387479 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8ctjp" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:09.171999 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:09.302777 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:09.402285 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:09.404814 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:09.662675 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:09.791064 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:09.834304 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:09.835333 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:10.163620 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:10.290748 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:10.334200 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:10.375978 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:10.484834 1387479 pod_ready.go:93] pod "coredns-6f6b679f8f-8ctjp" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.484870 1387479 pod_ready.go:82] duration metric: took 1.511872407s for pod "coredns-6f6b679f8f-8ctjp" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.484921 1387479 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.493243 1387479 pod_ready.go:93] pod "etcd-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.493266 1387479 pod_ready.go:82] duration metric: took 8.33299ms for pod "etcd-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.493307 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.505161 1387479 pod_ready.go:93] pod "kube-apiserver-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.505201 1387479 pod_ready.go:82] duration metric: took 11.879941ms for pod "kube-apiserver-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.505233 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.513574 1387479 pod_ready.go:93] pod "kube-controller-manager-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.513609 1387479 pod_ready.go:82] duration metric: took 8.361494ms for pod "kube-controller-manager-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.513624 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjdhm" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.519990 1387479 pod_ready.go:93] pod "kube-proxy-vjdhm" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.520017 1387479 pod_ready.go:82] duration metric: took 6.385977ms for pod "kube-proxy-vjdhm" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.520029 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.662464 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:10.791966 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:10.830527 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:10.832645 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:10.886808 1387479 pod_ready.go:93] pod "kube-scheduler-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.886833 1387479 pod_ready.go:82] duration metric: took 366.796151ms for pod "kube-scheduler-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.886846 1387479 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:11.163057 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:11.291877 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:11.342978 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:11.346028 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:11.676747 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:11.792035 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:11.834844 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:11.836150 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:12.170270 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:12.291905 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:12.333098 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:12.337145 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:12.663705 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:12.791617 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:12.835498 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:12.849462 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:12.894541 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:13.162949 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:13.290878 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:13.330411 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:13.333567 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:13.663875 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:13.791474 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:13.831482 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:13.834626 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:14.163689 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:14.291366 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:14.331546 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:14.333297 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:14.663551 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:14.791114 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:14.831914 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:14.832494 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:15.162346 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:15.291222 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:15.331065 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:15.332665 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:15.399927 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:15.665574 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:15.792151 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:15.833875 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:15.835122 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:16.162587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:16.293370 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:16.332857 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:16.335730 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:16.665314 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:16.791017 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:16.831353 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:16.840973 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:17.163102 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:17.291276 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:17.331310 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:17.332384 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:17.662873 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:17.790830 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:17.831771 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:17.832820 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:17.893626 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:18.163118 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:18.291894 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:18.354987 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:18.364856 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:18.663333 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:18.792309 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:18.832721 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:18.833350 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:19.164355 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:19.291307 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:19.331760 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:19.332489 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:19.662873 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:19.793474 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:19.894133 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:19.895208 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:19.895674 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:20.163079 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:20.291463 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:20.331525 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:20.335731 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:20.664027 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:20.791629 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:20.838935 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:20.843745 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:21.162804 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:21.291540 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:21.334327 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:21.335686 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:21.663438 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:21.790861 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:21.831004 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:21.833789 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:22.163110 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:22.290917 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:22.333064 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:22.334158 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:22.393345 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:22.663523 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:22.791662 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:22.834882 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:22.835334 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:23.163558 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:23.291706 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:23.333678 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:23.334264 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:23.663374 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:23.791360 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:23.833703 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:23.835081 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:24.162963 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:24.292142 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:24.396168 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:24.397063 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:24.400095 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:24.664172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:24.791502 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:24.834325 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:24.835358 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:25.164710 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:25.291386 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:25.333890 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:25.335431 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:25.664244 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:25.792580 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:25.840222 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:25.842687 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:26.162828 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:26.293165 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:26.332312 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:26.333139 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:26.663371 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:26.790825 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:26.830778 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:26.832824 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:26.893107 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:27.163667 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:27.290761 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:27.330738 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:27.332555 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:27.664925 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:27.799084 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:27.908163 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:27.909526 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:28.163916 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:28.294112 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:28.342370 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:28.343491 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:28.676350 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:28.791161 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:28.834335 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:28.835670 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:28.902150 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:29.163338 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:29.295824 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:29.335203 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:29.337135 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:29.664836 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:29.790706 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:29.834162 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:29.835845 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:30.167147 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:30.291758 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:30.394345 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:30.394838 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:30.662828 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:30.793735 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:30.895209 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:30.896134 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:31.163214 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:31.291638 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:31.331763 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:31.332807 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:31.393223 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:31.662393 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:31.791512 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:31.831714 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:31.833045 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:32.163040 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:32.291789 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:32.333863 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:32.336619 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:32.663154 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:32.792352 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:32.834620 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:32.837092 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:33.168491 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:33.291962 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:33.332708 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:33.332835 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:33.395207 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:33.664843 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:33.792171 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:33.832339 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:33.833159 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:34.162607 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:34.290749 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:34.332811 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:34.338106 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:34.664099 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:34.791495 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:34.846028 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:34.847794 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:35.165250 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:35.293074 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:35.333082 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:35.335768 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:35.396791 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:35.664545 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:35.791316 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:35.840041 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:35.843905 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:36.162703 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:36.291050 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:36.332168 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:36.333491 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:36.663627 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:36.795874 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:36.896019 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:36.897267 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:37.168152 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:37.291972 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:37.333586 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:37.336521 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:37.398944 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:37.663435 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:37.793289 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:37.831618 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:37.832532 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:38.165215 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:38.291497 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:38.335820 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:38.337592 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:38.667587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:38.791094 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:38.832809 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:38.833349 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:39.166099 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:39.290975 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:39.331841 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:39.333828 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:39.662383 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:39.792455 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:39.831377 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:39.833009 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:39.893221 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:40.162550 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:40.290806 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:40.334605 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:40.337582 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:40.665324 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:40.791664 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:40.830794 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:40.834925 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:41.162839 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:41.291321 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:41.331463 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:41.332749 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:41.666403 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:41.792132 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:41.894208 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:41.894857 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:41.895675 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:42.163412 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:42.294946 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:42.333446 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:42.334456 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:42.662156 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:42.791349 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:42.833376 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:42.839777 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:43.164191 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:43.291621 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:43.333104 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:43.334214 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:43.665554 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:43.791377 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:43.832759 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:43.833623 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:43.918869 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:44.162762 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:44.291468 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:44.332917 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:44.333475 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:44.662676 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:44.791384 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:44.831860 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:44.832505 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:45.180045 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:45.291030 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:45.335475 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:45.339232 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:45.662593 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:45.791406 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:45.833587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:45.834598 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:46.164512 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:46.291572 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:46.334750 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:46.335578 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:46.394609 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:46.664333 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:46.792287 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:46.834624 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:46.839353 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:47.166110 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:47.290618 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:47.330646 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:47.340649 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:47.664844 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:47.791607 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:47.831705 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:47.835234 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:48.165004 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:48.299325 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:48.333187 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:48.335150 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:48.663613 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:48.791706 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:48.837255 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:48.838339 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:48.896101 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:49.163437 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:49.292003 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:49.330577 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:49.340604 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:49.663036 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:49.791578 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:49.833962 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:49.835134 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:50.164308 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:50.291611 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:50.332851 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:50.334751 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:50.671899 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:50.792421 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:50.832530 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:50.835208 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:51.163688 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:51.292044 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:51.350401 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:51.353209 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:51.395395 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:51.665568 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:51.791922 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:51.838807 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:51.841412 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:52.163610 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:52.290874 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:52.332349 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:52.332923 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:52.663642 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:52.791877 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:52.831708 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:52.832341 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:53.163156 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:53.290470 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:53.330196 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:53.332717 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:53.663260 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:53.791242 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:53.830740 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:53.832471 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:53.893871 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:54.163024 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:54.291082 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:54.331441 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:54.333265 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:54.665739 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:54.792131 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:54.832160 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:54.833023 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:55.163260 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:55.291562 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:55.330036 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:55.333169 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:55.663100 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:55.793896 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:55.895014 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:55.895404 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:55.898473 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:56.163177 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:56.291558 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:56.345290 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:56.351275 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:56.663753 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:56.795794 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:56.830950 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:56.832855 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:57.181715 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:57.291972 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:57.330623 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:57.335219 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:57.664871 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:57.791931 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:57.836942 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:57.838330 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:57.897578 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:58.165068 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:58.291460 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:58.329952 1387479 kapi.go:107] duration metric: took 1m29.503417052s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 12:27:58.332164 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:58.662066 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:58.790557 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:58.833275 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:59.162850 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:59.292142 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:59.334790 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:59.662947 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:59.791681 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:59.834662 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:00.224471 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:00.313741 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:00.355616 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:00.400909 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:00.662401 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:00.791665 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:00.832636 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:01.163356 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:01.291028 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:01.333582 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:01.674205 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:01.792172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:01.836367 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:02.173076 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:02.292569 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:02.332588 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:02.662632 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:02.792513 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:02.839000 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:02.901650 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:03.163275 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:03.298696 1387479 kapi.go:107] duration metric: took 1m29.511469681s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 12:28:03.300917 1387479 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-606349 cluster.
	I0816 12:28:03.302851 1387479 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 12:28:03.305103 1387479 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 12:28:03.396773 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:03.662893 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:03.832334 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:04.162546 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:04.331735 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:04.662656 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:04.832684 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:05.163603 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:05.333919 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:05.392917 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:05.663185 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:05.833486 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:06.163316 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:06.332170 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:06.663710 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:06.833005 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:07.163146 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:07.332753 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:07.663365 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:07.832697 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:07.893815 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:08.163243 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:08.332961 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:08.663201 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:08.832067 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:09.162683 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:09.332110 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:09.662411 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:09.833544 1387479 kapi.go:107] duration metric: took 1m41.006012797s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 12:28:09.896115 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:10.163072 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:10.663877 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:11.168649 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:11.662800 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:12.163143 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:12.397889 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:12.663865 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:13.162656 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:13.664106 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:14.163514 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:14.662038 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:14.892831 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:15.162952 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:15.670824 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:16.163004 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:16.662693 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:16.893698 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:17.163504 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:17.663005 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:18.165035 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:18.662737 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:19.163715 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:19.393694 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:19.664049 1387479 kapi.go:107] duration metric: took 1m50.006557983s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 12:28:19.666033 1387479 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0816 12:28:19.668150 1387479 addons.go:510] duration metric: took 1m57.850743784s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0816 12:28:21.393804 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:23.893090 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:25.894940 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:28.393408 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:29.393687 1387479 pod_ready.go:93] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"True"
	I0816 12:28:29.393711 1387479 pod_ready.go:82] duration metric: took 1m18.506857719s for pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace to be "Ready" ...
	I0816 12:28:29.393724 1387479 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tlscx" in "kube-system" namespace to be "Ready" ...
	I0816 12:28:29.399209 1387479 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tlscx" in "kube-system" namespace has status "Ready":"True"
	I0816 12:28:29.399233 1387479 pod_ready.go:82] duration metric: took 5.500175ms for pod "nvidia-device-plugin-daemonset-tlscx" in "kube-system" namespace to be "Ready" ...
	I0816 12:28:29.399257 1387479 pod_ready.go:39] duration metric: took 1m20.46687626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:28:29.399276 1387479 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:28:29.399308 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 12:28:29.399375 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 12:28:29.457422 1387479 cri.go:89] found id: "8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:29.457446 1387479 cri.go:89] found id: ""
	I0816 12:28:29.457453 1387479 logs.go:276] 1 containers: [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80]
	I0816 12:28:29.457511 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.461058 1387479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 12:28:29.461136 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 12:28:29.502263 1387479 cri.go:89] found id: "20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:29.502327 1387479 cri.go:89] found id: ""
	I0816 12:28:29.502341 1387479 logs.go:276] 1 containers: [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928]
	I0816 12:28:29.502395 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.505822 1387479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 12:28:29.505894 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 12:28:29.545081 1387479 cri.go:89] found id: "bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:29.545106 1387479 cri.go:89] found id: ""
	I0816 12:28:29.545115 1387479 logs.go:276] 1 containers: [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8]
	I0816 12:28:29.545171 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.549095 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 12:28:29.549177 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 12:28:29.597604 1387479 cri.go:89] found id: "5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:29.597637 1387479 cri.go:89] found id: ""
	I0816 12:28:29.597645 1387479 logs.go:276] 1 containers: [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a]
	I0816 12:28:29.597711 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.601306 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 12:28:29.601396 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 12:28:29.642050 1387479 cri.go:89] found id: "c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:29.642073 1387479 cri.go:89] found id: ""
	I0816 12:28:29.642082 1387479 logs.go:276] 1 containers: [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960]
	I0816 12:28:29.642136 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.645654 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 12:28:29.645737 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 12:28:29.684767 1387479 cri.go:89] found id: "5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:29.684835 1387479 cri.go:89] found id: ""
	I0816 12:28:29.684856 1387479 logs.go:276] 1 containers: [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690]
	I0816 12:28:29.684947 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.688303 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 12:28:29.688437 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 12:28:29.735108 1387479 cri.go:89] found id: "e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:29.735232 1387479 cri.go:89] found id: ""
	I0816 12:28:29.735255 1387479 logs.go:276] 1 containers: [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4]
	I0816 12:28:29.735378 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.739400 1387479 logs.go:123] Gathering logs for kubelet ...
	I0816 12:28:29.739425 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 12:28:29.844499 1387479 logs.go:123] Gathering logs for dmesg ...
	I0816 12:28:29.844544 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 12:28:29.863666 1387479 logs.go:123] Gathering logs for kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] ...
	I0816 12:28:29.863698 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:29.920864 1387479 logs.go:123] Gathering logs for coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] ...
	I0816 12:28:29.920899 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:29.966050 1387479 logs.go:123] Gathering logs for kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] ...
	I0816 12:28:29.966082 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:30.062407 1387479 logs.go:123] Gathering logs for CRI-O ...
	I0816 12:28:30.062449 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 12:28:30.162425 1387479 logs.go:123] Gathering logs for describe nodes ...
	I0816 12:28:30.162471 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 12:28:30.373504 1387479 logs.go:123] Gathering logs for etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] ...
	I0816 12:28:30.373554 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:30.427062 1387479 logs.go:123] Gathering logs for kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] ...
	I0816 12:28:30.427101 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:30.467133 1387479 logs.go:123] Gathering logs for kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] ...
	I0816 12:28:30.467162 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:30.540798 1387479 logs.go:123] Gathering logs for kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] ...
	I0816 12:28:30.540836 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:30.588084 1387479 logs.go:123] Gathering logs for container status ...
	I0816 12:28:30.588116 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 12:28:33.152208 1387479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:28:33.166410 1387479 api_server.go:72] duration metric: took 2m11.349407348s to wait for apiserver process to appear ...
	I0816 12:28:33.166437 1387479 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:28:33.166474 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 12:28:33.166533 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 12:28:33.208796 1387479 cri.go:89] found id: "8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:33.208816 1387479 cri.go:89] found id: ""
	I0816 12:28:33.208825 1387479 logs.go:276] 1 containers: [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80]
	I0816 12:28:33.208884 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.212497 1387479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 12:28:33.212614 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 12:28:33.253888 1387479 cri.go:89] found id: "20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:33.253960 1387479 cri.go:89] found id: ""
	I0816 12:28:33.253981 1387479 logs.go:276] 1 containers: [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928]
	I0816 12:28:33.254071 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.258250 1387479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 12:28:33.258328 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 12:28:33.296218 1387479 cri.go:89] found id: "bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:33.296241 1387479 cri.go:89] found id: ""
	I0816 12:28:33.296250 1387479 logs.go:276] 1 containers: [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8]
	I0816 12:28:33.296307 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.299837 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 12:28:33.299911 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 12:28:33.340233 1387479 cri.go:89] found id: "5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:33.340256 1387479 cri.go:89] found id: ""
	I0816 12:28:33.340265 1387479 logs.go:276] 1 containers: [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a]
	I0816 12:28:33.340321 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.343860 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 12:28:33.343928 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 12:28:33.387651 1387479 cri.go:89] found id: "c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:33.387674 1387479 cri.go:89] found id: ""
	I0816 12:28:33.387682 1387479 logs.go:276] 1 containers: [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960]
	I0816 12:28:33.387742 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.391358 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 12:28:33.391431 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 12:28:33.429884 1387479 cri.go:89] found id: "5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:33.429910 1387479 cri.go:89] found id: ""
	I0816 12:28:33.429919 1387479 logs.go:276] 1 containers: [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690]
	I0816 12:28:33.429974 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.433533 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 12:28:33.433637 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 12:28:33.478064 1387479 cri.go:89] found id: "e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:33.478087 1387479 cri.go:89] found id: ""
	I0816 12:28:33.478095 1387479 logs.go:276] 1 containers: [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4]
	I0816 12:28:33.478149 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.481734 1387479 logs.go:123] Gathering logs for kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] ...
	I0816 12:28:33.481824 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:33.555958 1387479 logs.go:123] Gathering logs for CRI-O ...
	I0816 12:28:33.555994 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 12:28:33.659868 1387479 logs.go:123] Gathering logs for container status ...
	I0816 12:28:33.659946 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 12:28:33.727812 1387479 logs.go:123] Gathering logs for kubelet ...
	I0816 12:28:33.727843 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 12:28:33.844973 1387479 logs.go:123] Gathering logs for dmesg ...
	I0816 12:28:33.845011 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 12:28:33.864027 1387479 logs.go:123] Gathering logs for kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] ...
	I0816 12:28:33.864065 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:33.923895 1387479 logs.go:123] Gathering logs for etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] ...
	I0816 12:28:33.923928 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:33.978211 1387479 logs.go:123] Gathering logs for kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] ...
	I0816 12:28:33.978246 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:34.075227 1387479 logs.go:123] Gathering logs for describe nodes ...
	I0816 12:28:34.075266 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 12:28:34.222992 1387479 logs.go:123] Gathering logs for coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] ...
	I0816 12:28:34.223024 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:34.264047 1387479 logs.go:123] Gathering logs for kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] ...
	I0816 12:28:34.264077 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:34.312494 1387479 logs.go:123] Gathering logs for kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] ...
	I0816 12:28:34.312526 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:36.853727 1387479 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 12:28:36.862140 1387479 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 12:28:36.863277 1387479 api_server.go:141] control plane version: v1.31.0
	I0816 12:28:36.863308 1387479 api_server.go:131] duration metric: took 3.696864236s to wait for apiserver health ...
	I0816 12:28:36.863318 1387479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:28:36.863339 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 12:28:36.863406 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 12:28:36.902968 1387479 cri.go:89] found id: "8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:36.902991 1387479 cri.go:89] found id: ""
	I0816 12:28:36.902998 1387479 logs.go:276] 1 containers: [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80]
	I0816 12:28:36.903087 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:36.906655 1387479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 12:28:36.906731 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 12:28:36.945628 1387479 cri.go:89] found id: "20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:36.945694 1387479 cri.go:89] found id: ""
	I0816 12:28:36.945716 1387479 logs.go:276] 1 containers: [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928]
	I0816 12:28:36.945829 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:36.949385 1387479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 12:28:36.949469 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 12:28:36.991004 1387479 cri.go:89] found id: "bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:36.991029 1387479 cri.go:89] found id: ""
	I0816 12:28:36.991036 1387479 logs.go:276] 1 containers: [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8]
	I0816 12:28:36.991092 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:36.994758 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 12:28:36.994894 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 12:28:37.052708 1387479 cri.go:89] found id: "5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:37.053822 1387479 cri.go:89] found id: ""
	I0816 12:28:37.053860 1387479 logs.go:276] 1 containers: [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a]
	I0816 12:28:37.053930 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.059581 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 12:28:37.059707 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 12:28:37.101933 1387479 cri.go:89] found id: "c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:37.101957 1387479 cri.go:89] found id: ""
	I0816 12:28:37.101965 1387479 logs.go:276] 1 containers: [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960]
	I0816 12:28:37.102022 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.105575 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 12:28:37.105648 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 12:28:37.151389 1387479 cri.go:89] found id: "5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:37.151414 1387479 cri.go:89] found id: ""
	I0816 12:28:37.151423 1387479 logs.go:276] 1 containers: [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690]
	I0816 12:28:37.151510 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.155322 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 12:28:37.155423 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 12:28:37.196293 1387479 cri.go:89] found id: "e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:37.196326 1387479 cri.go:89] found id: ""
	I0816 12:28:37.196335 1387479 logs.go:276] 1 containers: [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4]
	I0816 12:28:37.196409 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.200119 1387479 logs.go:123] Gathering logs for dmesg ...
	I0816 12:28:37.200195 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 12:28:37.217260 1387479 logs.go:123] Gathering logs for kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] ...
	I0816 12:28:37.217336 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:37.289119 1387479 logs.go:123] Gathering logs for etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] ...
	I0816 12:28:37.289162 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:37.342084 1387479 logs.go:123] Gathering logs for kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] ...
	I0816 12:28:37.342121 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:37.394454 1387479 logs.go:123] Gathering logs for kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] ...
	I0816 12:28:37.394493 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:37.461372 1387479 logs.go:123] Gathering logs for container status ...
	I0816 12:28:37.461412 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 12:28:37.528629 1387479 logs.go:123] Gathering logs for kubelet ...
	I0816 12:28:37.528661 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 12:28:37.637062 1387479 logs.go:123] Gathering logs for describe nodes ...
	I0816 12:28:37.637102 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 12:28:37.775552 1387479 logs.go:123] Gathering logs for coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] ...
	I0816 12:28:37.775584 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:37.824816 1387479 logs.go:123] Gathering logs for kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] ...
	I0816 12:28:37.824849 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:37.865321 1387479 logs.go:123] Gathering logs for kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] ...
	I0816 12:28:37.865351 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:37.922440 1387479 logs.go:123] Gathering logs for CRI-O ...
	I0816 12:28:37.922476 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 12:28:40.538913 1387479 system_pods.go:59] 18 kube-system pods found
	I0816 12:28:40.538957 1387479 system_pods.go:61] "coredns-6f6b679f8f-8ctjp" [1dd36daf-8683-4242-8ac3-9a037d03b77d] Running
	I0816 12:28:40.538965 1387479 system_pods.go:61] "csi-hostpath-attacher-0" [faaebc96-a57a-4ba1-9b1b-9af9eda2bfaa] Running
	I0816 12:28:40.538970 1387479 system_pods.go:61] "csi-hostpath-resizer-0" [5c750c14-1267-4831-b07d-f1340d77d353] Running
	I0816 12:28:40.538975 1387479 system_pods.go:61] "csi-hostpathplugin-82nxb" [c0368736-0e64-416c-8421-8681c40ed712] Running
	I0816 12:28:40.538979 1387479 system_pods.go:61] "etcd-addons-606349" [e11563de-8441-4a42-9c49-ee724454e4d3] Running
	I0816 12:28:40.538983 1387479 system_pods.go:61] "kindnet-5jgmz" [3f101520-e1b8-4170-8ca5-94d6a290443e] Running
	I0816 12:28:40.538988 1387479 system_pods.go:61] "kube-apiserver-addons-606349" [176e3fad-50a6-4223-b90c-3ef3e52c7289] Running
	I0816 12:28:40.538992 1387479 system_pods.go:61] "kube-controller-manager-addons-606349" [607563de-f7a6-4d48-b359-2a6bd36a1252] Running
	I0816 12:28:40.538998 1387479 system_pods.go:61] "kube-ingress-dns-minikube" [ff0ffcea-ad8a-44e3-a010-29d571f3bd06] Running
	I0816 12:28:40.539002 1387479 system_pods.go:61] "kube-proxy-vjdhm" [f62a6b13-cf4c-49e6-b710-dcc4bdb8d830] Running
	I0816 12:28:40.539006 1387479 system_pods.go:61] "kube-scheduler-addons-606349" [c0c34f3e-eee1-4bd3-bac1-6d70f95c1cdd] Running
	I0816 12:28:40.539013 1387479 system_pods.go:61] "metrics-server-8988944d9-lfhc7" [93c15fce-49db-484e-817d-4f2f088bd4e5] Running
	I0816 12:28:40.539017 1387479 system_pods.go:61] "nvidia-device-plugin-daemonset-tlscx" [50afed3c-442a-4c9e-b404-875b12dd96e9] Running
	I0816 12:28:40.539021 1387479 system_pods.go:61] "registry-6fb4cdfc84-pbm8s" [73faa728-22c2-4a32-a43d-85763f935998] Running
	I0816 12:28:40.539026 1387479 system_pods.go:61] "registry-proxy-xqwvx" [a9e788b9-88d0-492b-8001-c0da62bb7adc] Running
	I0816 12:28:40.539038 1387479 system_pods.go:61] "snapshot-controller-56fcc65765-mjvvx" [ce222d15-6641-4c9b-b583-6c9c45a34880] Running
	I0816 12:28:40.539042 1387479 system_pods.go:61] "snapshot-controller-56fcc65765-q8vp5" [83bb85d3-0be7-46b5-86a9-aa9f949b555f] Running
	I0816 12:28:40.539046 1387479 system_pods.go:61] "storage-provisioner" [42e6183e-b46d-4e8d-8c94-b53653e34dca] Running
	I0816 12:28:40.539057 1387479 system_pods.go:74] duration metric: took 3.675731692s to wait for pod list to return data ...
	I0816 12:28:40.539069 1387479 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:28:40.541919 1387479 default_sa.go:45] found service account: "default"
	I0816 12:28:40.541954 1387479 default_sa.go:55] duration metric: took 2.875563ms for default service account to be created ...
	I0816 12:28:40.541965 1387479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:28:40.552339 1387479 system_pods.go:86] 18 kube-system pods found
	I0816 12:28:40.552388 1387479 system_pods.go:89] "coredns-6f6b679f8f-8ctjp" [1dd36daf-8683-4242-8ac3-9a037d03b77d] Running
	I0816 12:28:40.552398 1387479 system_pods.go:89] "csi-hostpath-attacher-0" [faaebc96-a57a-4ba1-9b1b-9af9eda2bfaa] Running
	I0816 12:28:40.552403 1387479 system_pods.go:89] "csi-hostpath-resizer-0" [5c750c14-1267-4831-b07d-f1340d77d353] Running
	I0816 12:28:40.552407 1387479 system_pods.go:89] "csi-hostpathplugin-82nxb" [c0368736-0e64-416c-8421-8681c40ed712] Running
	I0816 12:28:40.552413 1387479 system_pods.go:89] "etcd-addons-606349" [e11563de-8441-4a42-9c49-ee724454e4d3] Running
	I0816 12:28:40.552417 1387479 system_pods.go:89] "kindnet-5jgmz" [3f101520-e1b8-4170-8ca5-94d6a290443e] Running
	I0816 12:28:40.552422 1387479 system_pods.go:89] "kube-apiserver-addons-606349" [176e3fad-50a6-4223-b90c-3ef3e52c7289] Running
	I0816 12:28:40.552427 1387479 system_pods.go:89] "kube-controller-manager-addons-606349" [607563de-f7a6-4d48-b359-2a6bd36a1252] Running
	I0816 12:28:40.552431 1387479 system_pods.go:89] "kube-ingress-dns-minikube" [ff0ffcea-ad8a-44e3-a010-29d571f3bd06] Running
	I0816 12:28:40.552436 1387479 system_pods.go:89] "kube-proxy-vjdhm" [f62a6b13-cf4c-49e6-b710-dcc4bdb8d830] Running
	I0816 12:28:40.552440 1387479 system_pods.go:89] "kube-scheduler-addons-606349" [c0c34f3e-eee1-4bd3-bac1-6d70f95c1cdd] Running
	I0816 12:28:40.552446 1387479 system_pods.go:89] "metrics-server-8988944d9-lfhc7" [93c15fce-49db-484e-817d-4f2f088bd4e5] Running
	I0816 12:28:40.552451 1387479 system_pods.go:89] "nvidia-device-plugin-daemonset-tlscx" [50afed3c-442a-4c9e-b404-875b12dd96e9] Running
	I0816 12:28:40.552455 1387479 system_pods.go:89] "registry-6fb4cdfc84-pbm8s" [73faa728-22c2-4a32-a43d-85763f935998] Running
	I0816 12:28:40.552461 1387479 system_pods.go:89] "registry-proxy-xqwvx" [a9e788b9-88d0-492b-8001-c0da62bb7adc] Running
	I0816 12:28:40.552465 1387479 system_pods.go:89] "snapshot-controller-56fcc65765-mjvvx" [ce222d15-6641-4c9b-b583-6c9c45a34880] Running
	I0816 12:28:40.552469 1387479 system_pods.go:89] "snapshot-controller-56fcc65765-q8vp5" [83bb85d3-0be7-46b5-86a9-aa9f949b555f] Running
	I0816 12:28:40.552473 1387479 system_pods.go:89] "storage-provisioner" [42e6183e-b46d-4e8d-8c94-b53653e34dca] Running
	I0816 12:28:40.552484 1387479 system_pods.go:126] duration metric: took 10.512296ms to wait for k8s-apps to be running ...
	I0816 12:28:40.552492 1387479 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:28:40.552557 1387479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:28:40.565507 1387479 system_svc.go:56] duration metric: took 13.005118ms WaitForService to wait for kubelet
	I0816 12:28:40.565560 1387479 kubeadm.go:582] duration metric: took 2m18.748562439s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:28:40.565583 1387479 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:28:40.569211 1387479 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0816 12:28:40.569246 1387479 node_conditions.go:123] node cpu capacity is 2
	I0816 12:28:40.569261 1387479 node_conditions.go:105] duration metric: took 3.670413ms to run NodePressure ...
	I0816 12:28:40.569273 1387479 start.go:241] waiting for startup goroutines ...
	I0816 12:28:40.569281 1387479 start.go:246] waiting for cluster config update ...
	I0816 12:28:40.569298 1387479 start.go:255] writing updated cluster config ...
	I0816 12:28:40.569618 1387479 ssh_runner.go:195] Run: rm -f paused
	I0816 12:28:40.917637 1387479 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 12:28:40.921597 1387479 out.go:177] * Done! kubectl is now configured to use "addons-606349" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.573290045Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3c07dad0-a154-4c6c-b832-1bdedf0e61f0 name=/runtime.v1.ImageService/ImageStatus
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.575192278Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-ktmlr/hello-world-app" id=031d1f5a-fde0-4b8f-b5fe-7890de5d75a5 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.575290115Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.599525037Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dd24b934ed469de073ec7a0b3c7de2cdcb0330e45c4d2e66ef524df83b502231/merged/etc/passwd: no such file or directory"
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.599712310Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dd24b934ed469de073ec7a0b3c7de2cdcb0330e45c4d2e66ef524df83b502231/merged/etc/group: no such file or directory"
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.638248138Z" level=info msg="Created container 52b2678acd353936003c005aed51905c7de4607c49afd0968d50e6bb60664054: default/hello-world-app-55bf9c44b4-ktmlr/hello-world-app" id=031d1f5a-fde0-4b8f-b5fe-7890de5d75a5 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.639153557Z" level=info msg="Starting container: 52b2678acd353936003c005aed51905c7de4607c49afd0968d50e6bb60664054" id=e15beead-c920-488b-9823-0364f41c9691 name=/runtime.v1.RuntimeService/StartContainer
	Aug 16 12:32:36 addons-606349 crio[958]: time="2024-08-16 12:32:36.646916451Z" level=info msg="Started container" PID=7297 containerID=52b2678acd353936003c005aed51905c7de4607c49afd0968d50e6bb60664054 description=default/hello-world-app-55bf9c44b4-ktmlr/hello-world-app id=e15beead-c920-488b-9823-0364f41c9691 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4c59245ac3a2fe5be6a63504388d2dc0e213eeac2efdb14670f755bfd382c9c
	Aug 16 12:32:37 addons-606349 crio[958]: time="2024-08-16 12:32:37.246589319Z" level=info msg="Removing container: d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac" id=998317d8-1b55-443b-b709-47d64c4a84fa name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 12:32:37 addons-606349 crio[958]: time="2024-08-16 12:32:37.270770441Z" level=info msg="Removed container d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=998317d8-1b55-443b-b709-47d64c4a84fa name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 12:32:38 addons-606349 crio[958]: time="2024-08-16 12:32:38.965076377Z" level=info msg="Stopping container: f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05 (timeout: 2s)" id=a7d3b0ab-a4ca-4aed-82b1-079a0651bd89 name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:32:40 addons-606349 crio[958]: time="2024-08-16 12:32:40.971029882Z" level=warning msg="Stopping container f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=a7d3b0ab-a4ca-4aed-82b1-079a0651bd89 name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:32:41 addons-606349 conmon[4532]: conmon f50d93afe39d085ae45e <ninfo>: container 4543 exited with status 137
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.109606212Z" level=info msg="Stopped container f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05: ingress-nginx/ingress-nginx-controller-7559cbf597-g9jr9/controller" id=a7d3b0ab-a4ca-4aed-82b1-079a0651bd89 name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.110177029Z" level=info msg="Stopping pod sandbox: 564b0d9aace92aae26eaca29b82b7eb3212c8667b4eb1e460dd724d0938e69d2" id=4e66f9dd-1d0e-420c-975b-9557bf198ba1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.114028873Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-FOU5LPWIVFTXR25W - [0:0]\n:KUBE-HP-KVOHDJGT2KVPSQAX - [0:0]\n-X KUBE-HP-FOU5LPWIVFTXR25W\n-X KUBE-HP-KVOHDJGT2KVPSQAX\nCOMMIT\n"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.115474758Z" level=info msg="Closing host port tcp:80"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.115527073Z" level=info msg="Closing host port tcp:443"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.116874301Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.116910051Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.117101581Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7559cbf597-g9jr9 Namespace:ingress-nginx ID:564b0d9aace92aae26eaca29b82b7eb3212c8667b4eb1e460dd724d0938e69d2 UID:42f3c3a2-f4f1-45b7-bc28-5f707d11e870 NetNS:/var/run/netns/10e502a4-ee4b-4f59-a990-b10f64e584bc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.117246564Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-g9jr9 from CNI network \"kindnet\" (type=ptp)"
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.154342892Z" level=info msg="Stopped pod sandbox: 564b0d9aace92aae26eaca29b82b7eb3212c8667b4eb1e460dd724d0938e69d2" id=4e66f9dd-1d0e-420c-975b-9557bf198ba1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.256776854Z" level=info msg="Removing container: f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05" id=6302e590-3b07-48e1-ab3a-49c2876df9bc name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 12:32:41 addons-606349 crio[958]: time="2024-08-16 12:32:41.271344458Z" level=info msg="Removed container f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05: ingress-nginx/ingress-nginx-controller-7559cbf597-g9jr9/controller" id=6302e590-3b07-48e1-ab3a-49c2876df9bc name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	52b2678acd353       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app            0                   e4c59245ac3a2       hello-world-app-55bf9c44b4-ktmlr
	ffbdceed8df31       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                              2 minutes ago       Running             nginx                      0                   e1db1efc0bbd6       nginx
	1745a4491518b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                    0                   413248ad425c2       busybox
	aaaf316c7eca3       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     4 minutes ago       Running             nvidia-device-plugin-ctr   0                   ccc8e64410820       nvidia-device-plugin-daemonset-tlscx
	7e5eac37fa0b0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              patch                      0                   78789c2978da1       ingress-nginx-admission-patch-694rx
	c590a6cb80b1e       gcr.io/cloud-spanner-emulator/emulator@sha256:76d8c8cf50cb10809697c83120f51b216b49ea6538c15e083d843172d597774f               5 minutes ago       Running             cloud-spanner-emulator     0                   2506f308b0ef3       cloud-spanner-emulator-c4bc9b5f8-rqdqb
	e4c0ee4099d25       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server             0                   cd02c70718582       metrics-server-8988944d9-lfhc7
	6c9d9affb175a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                     0                   fe99186596fb8       ingress-nginx-admission-create-vvxkv
	54d4a78e74675       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago       Running             yakd                       0                   54a8205f4f8f7       yakd-dashboard-67d98fc6b-h8f8w
	a9f056a1a1096       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago       Running             local-path-provisioner     0                   da35a05534ea1       local-path-provisioner-86d989889c-jx4xd
	bbdc93411ee89       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                    0                   cffeb7f91719f       coredns-6f6b679f8f-8ctjp
	21cba91f907bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner        0                   9885489f05ffd       storage-provisioner
	e9086ec7c6658       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           6 minutes ago       Running             kindnet-cni                0                   7300d870e06dd       kindnet-5jgmz
	c0d8bb8efc5a6       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             6 minutes ago       Running             kube-proxy                 0                   f63c51380eace       kube-proxy-vjdhm
	20d8a65b34a90       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             6 minutes ago       Running             etcd                       0                   393580ac3310e       etcd-addons-606349
	5b36378235e83       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             6 minutes ago       Running             kube-scheduler             0                   3d52049ea4db5       kube-scheduler-addons-606349
	8254d00c3ba90       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             6 minutes ago       Running             kube-apiserver             0                   ea2a34eb927e8       kube-apiserver-addons-606349
	5b54e04f88c26       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             6 minutes ago       Running             kube-controller-manager    0                   8b0ea0c0fedc7       kube-controller-manager-addons-606349
	
	
	==> coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] <==
	[INFO] 10.244.0.18:56454 - 57187 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00251171s
	[INFO] 10.244.0.18:39595 - 13081 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000146198s
	[INFO] 10.244.0.18:39595 - 61982 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092799s
	[INFO] 10.244.0.18:52789 - 35962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107454s
	[INFO] 10.244.0.18:52789 - 20350 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153115s
	[INFO] 10.244.0.18:38983 - 3682 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047589s
	[INFO] 10.244.0.18:38983 - 10848 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034084s
	[INFO] 10.244.0.18:38226 - 38572 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004667s
	[INFO] 10.244.0.18:38226 - 27055 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034264s
	[INFO] 10.244.0.18:54811 - 51498 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001744363s
	[INFO] 10.244.0.18:54811 - 53284 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00136552s
	[INFO] 10.244.0.18:57115 - 46850 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093431s
	[INFO] 10.244.0.18:57115 - 64284 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044988s
	[INFO] 10.244.0.19:40719 - 43595 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000263875s
	[INFO] 10.244.0.19:37644 - 15119 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000410392s
	[INFO] 10.244.0.19:58790 - 54113 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016863s
	[INFO] 10.244.0.19:33847 - 44098 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091856s
	[INFO] 10.244.0.19:48670 - 33356 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130936s
	[INFO] 10.244.0.19:53728 - 63183 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000239653s
	[INFO] 10.244.0.19:53416 - 55352 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003389405s
	[INFO] 10.244.0.19:48410 - 21450 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004221703s
	[INFO] 10.244.0.19:42844 - 56534 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001826866s
	[INFO] 10.244.0.19:38615 - 30667 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002585662s
	[INFO] 10.244.0.22:52400 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015684s
	[INFO] 10.244.0.22:36078 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098173s
	
	
	==> describe nodes <==
	Name:               addons-606349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-606349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=addons-606349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_26_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-606349
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:26:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-606349
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:32:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:30:52 +0000   Fri, 16 Aug 2024 12:26:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:30:52 +0000   Fri, 16 Aug 2024 12:26:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:30:52 +0000   Fri, 16 Aug 2024 12:26:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:30:52 +0000   Fri, 16 Aug 2024 12:27:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-606349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 7716b7cc286d4cd2b024d8361134384f
	  System UUID:                a1c189a3-b18b-4e19-b9eb-1cda8c1cacc5
	  Boot ID:                    cb16ac7a-0cca-4a0e-b7d0-05329bf090df
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  default                     cloud-spanner-emulator-c4bc9b5f8-rqdqb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  default                     hello-world-app-55bf9c44b4-ktmlr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-6f6b679f8f-8ctjp                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m24s
	  kube-system                 etcd-addons-606349                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m29s
	  kube-system                 kindnet-5jgmz                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m25s
	  kube-system                 kube-apiserver-addons-606349               250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-addons-606349      200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-vjdhm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-addons-606349               100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 metrics-server-8988944d9-lfhc7             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m19s
	  kube-system                 nvidia-device-plugin-daemonset-tlscx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  local-path-storage          local-path-provisioner-86d989889c-jx4xd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-h8f8w             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m17s  kube-proxy       
	  Normal   Starting                 6m30s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m30s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m29s  kubelet          Node addons-606349 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m29s  kubelet          Node addons-606349 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m29s  kubelet          Node addons-606349 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m25s  node-controller  Node addons-606349 event: Registered Node addons-606349 in Controller
	  Normal   NodeReady                5m38s  kubelet          Node addons-606349 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 10:02] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Aug16 11:25] FS-Cache: Duplicate cookie detected
	[  +0.000691] FS-Cache: O-cookie c=0000005a [p=00000002 fl=222 nc=0 na=1]
	[  +0.000926] FS-Cache: O-cookie d=00000000a864430e{9P.session} n=000000009bb6de5b
	[  +0.001091] FS-Cache: O-key=[10] '34333033313135373335'
	[  +0.000765] FS-Cache: N-cookie c=0000005b [p=00000002 fl=2 nc=0 na=1]
	[  +0.000894] FS-Cache: N-cookie d=00000000a864430e{9P.session} n=000000006a6ee473
	[  +0.001065] FS-Cache: N-key=[10] '34333033313135373335'
	[Aug16 11:58] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[  +0.866060] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] <==
	{"level":"warn","ts":"2024-08-16T12:26:23.325124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.810713Z","time spent":"513.884532ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7425,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-addons-606349\" mod_revision:299 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-addons-606349\" value_size:7362 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-addons-606349\" > >"}
	{"level":"info","ts":"2024-08-16T12:26:23.170145Z","caller":"traceutil/trace.go:171","msg":"trace[1433172012] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"510.694829ms","start":"2024-08-16T12:26:22.659437Z","end":"2024-08-16T12:26:23.170131Z","steps":["trace[1433172012] 'process raft request'  (duration: 495.866817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.335901Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.659420Z","time spent":"676.253591ms","remote":"127.0.0.1:40580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":669,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy.17ec3523154eae63\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy.17ec3523154eae63\" value_size:595 lease:8128031248180017206 >> failure:<>"}
	{"level":"info","ts":"2024-08-16T12:26:23.170191Z","caller":"traceutil/trace.go:171","msg":"trace[661420716] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"349.997731ms","start":"2024-08-16T12:26:22.820168Z","end":"2024-08-16T12:26:23.170165Z","steps":["trace[661420716] 'process raft request'  (duration: 335.180952ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.337831Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.820149Z","time spent":"517.53014ms","remote":"127.0.0.1:40580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-vjdhm.17ec352330281631\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-vjdhm.17ec352330281631\" value_size:612 lease:8128031248180017206 >> failure:<>"}
	{"level":"warn","ts":"2024-08-16T12:26:23.345892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.327106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-08-16T12:26:23.346067Z","caller":"traceutil/trace.go:171","msg":"trace[986765240] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:333; }","duration":"159.515773ms","start":"2024-08-16T12:26:23.186539Z","end":"2024-08-16T12:26:23.346055Z","steps":["trace[986765240] 'agreement among raft nodes before linearized reading'  (duration: 159.15992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.170250Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.003454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-08-16T12:26:23.346664Z","caller":"traceutil/trace.go:171","msg":"trace[1165937770] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:331; }","duration":"535.430571ms","start":"2024-08-16T12:26:22.811223Z","end":"2024-08-16T12:26:23.346653Z","steps":["trace[1165937770] 'agreement among raft nodes before linearized reading'  (duration: 358.979265ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.346706Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.811202Z","time spent":"535.490189ms","remote":"127.0.0.1:40700","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":209,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" "}
	{"level":"warn","ts":"2024-08-16T12:26:23.170284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.645968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:26:23.377963Z","caller":"traceutil/trace.go:171","msg":"trace[1751129751] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:331; }","duration":"567.308378ms","start":"2024-08-16T12:26:22.810632Z","end":"2024-08-16T12:26:23.377940Z","steps":["trace[1751129751] 'agreement among raft nodes before linearized reading'  (duration: 359.635194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.378419Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.810615Z","time spent":"567.503281ms","remote":"127.0.0.1:40602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":24,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces\" limit:1 "}
	{"level":"info","ts":"2024-08-16T12:26:23.170742Z","caller":"traceutil/trace.go:171","msg":"trace[48779832] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"121.521878ms","start":"2024-08-16T12:26:23.049203Z","end":"2024-08-16T12:26:23.170725Z","steps":["trace[48779832] 'process raft request'  (duration: 121.385338ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:26:23.170773Z","caller":"traceutil/trace.go:171","msg":"trace[81475285] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"174.702117ms","start":"2024-08-16T12:26:22.996061Z","end":"2024-08-16T12:26:23.170763Z","steps":["trace[81475285] 'process raft request'  (duration: 174.435871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.395781Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.996042Z","time spent":"399.677091ms","remote":"127.0.0.1:40580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":704,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-6f6b679f8f.17ec35233ef26562\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-6f6b679f8f.17ec35233ef26562\" value_size:622 lease:8128031248180017206 >> failure:<>"}
	{"level":"warn","ts":"2024-08-16T12:26:23.379741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:23.049178Z","time spent":"330.520558ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3505,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-8ctjp\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-8ctjp\" value_size:3446 >> failure:<>"}
	{"level":"info","ts":"2024-08-16T12:26:23.442194Z","caller":"traceutil/trace.go:171","msg":"trace[546240696] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-5jgmz; range_end:; response_count:1; response_revision:331; }","duration":"455.055598ms","start":"2024-08-16T12:26:22.810484Z","end":"2024-08-16T12:26:23.265540Z","steps":["trace[546240696] 'agreement among raft nodes before linearized reading'  (duration: 347.920414ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.449916Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.810437Z","time spent":"639.177965ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3713,"request content":"key:\"/registry/pods/kube-system/kindnet-5jgmz\" "}
	{"level":"info","ts":"2024-08-16T12:26:24.913517Z","caller":"traceutil/trace.go:171","msg":"trace[185896217] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"104.995525ms","start":"2024-08-16T12:26:24.808485Z","end":"2024-08-16T12:26:24.913481Z","steps":["trace[185896217] 'process raft request'  (duration: 81.652083ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:26:24.987912Z","caller":"traceutil/trace.go:171","msg":"trace[1717652546] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"160.954524ms","start":"2024-08-16T12:26:24.817926Z","end":"2024-08-16T12:26:24.978880Z","steps":["trace[1717652546] 'process raft request'  (duration: 131.68774ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:26:25.090390Z","caller":"traceutil/trace.go:171","msg":"trace[160194359] linearizableReadLoop","detail":"{readStateIndex:352; appliedIndex:352; }","duration":"139.510588ms","start":"2024-08-16T12:26:24.950859Z","end":"2024-08-16T12:26:25.090369Z","steps":["trace[160194359] 'read index received'  (duration: 139.501628ms)","trace[160194359] 'applied index is now lower than readState.Index'  (duration: 7.598µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T12:26:25.098425Z","caller":"traceutil/trace.go:171","msg":"trace[1348143978] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"147.46639ms","start":"2024-08-16T12:26:24.950937Z","end":"2024-08-16T12:26:25.098404Z","steps":["trace[1348143978] 'process raft request'  (duration: 147.168194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:25.098548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.660727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:26:25.102781Z","caller":"traceutil/trace.go:171","msg":"trace[1234012080] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:343; }","duration":"151.913748ms","start":"2024-08-16T12:26:24.950855Z","end":"2024-08-16T12:26:25.102768Z","steps":["trace[1234012080] 'agreement among raft nodes before linearized reading'  (duration: 147.618922ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:32:46 up 10:15,  0 users,  load average: 0.41, 1.13, 1.83
	Linux addons-606349 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] <==
	I0816 12:31:38.238815       1 main.go:299] handling current node
	W0816 12:31:41.887396       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:31:41.887451       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0816 12:31:42.526681       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 12:31:42.526730       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0816 12:31:46.883952       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 12:31:46.883989       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 12:31:48.238479       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:31:48.238534       1 main.go:299] handling current node
	I0816 12:31:58.239105       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:31:58.239286       1 main.go:299] handling current node
	I0816 12:32:08.239167       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:32:08.239286       1 main.go:299] handling current node
	W0816 12:32:17.336270       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:32:17.336319       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 12:32:18.238688       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:32:18.238725       1 main.go:299] handling current node
	W0816 12:32:19.013984       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 12:32:19.014019       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0816 12:32:22.966663       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 12:32:22.966701       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 12:32:28.239203       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:32:28.239243       1 main.go:299] handling current node
	I0816 12:32:38.238792       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:32:38.238828       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] <==
	I0816 12:28:29.162184       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 12:28:51.635453       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37768: use of closed network connection
	E0816 12:28:52.041028       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37790: use of closed network connection
	E0816 12:29:00.842674       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0816 12:29:16.096740       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0816 12:29:25.410773       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 12:30:02.524141       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.524275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.551548       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.552849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.572124       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.572261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.671300       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.673290       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.696107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.696219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 12:30:03.671562       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 12:30:03.696921       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0816 12:30:03.714960       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0816 12:30:09.506995       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 12:30:10.550416       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 12:30:15.142756       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 12:30:15.461904       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.85.14"}
	I0816 12:32:35.369720       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.173.120"}
	E0816 12:32:38.014124       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] <==
	W0816 12:31:22.829287       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:31:22.829333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:31:28.471217       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:31:28.471264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:31:32.293439       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:31:32.293484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:31:32.423043       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:31:32.423089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:32:11.190039       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:32:11.190087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:32:18.790152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:32:18.790196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:32:28.267219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:32:28.267264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:32:29.454815       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:32:29.454858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:32:35.122588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.257904ms"
	I0816 12:32:35.134393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.790992ms"
	I0816 12:32:35.147004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.972611ms"
	I0816 12:32:35.147189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="51.454µs"
	I0816 12:32:37.275289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.173477ms"
	I0816 12:32:37.275478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.18µs"
	I0816 12:32:37.927201       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0816 12:32:37.936324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="8.624µs"
	I0816 12:32:37.936654       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	
	
	==> kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] <==
	I0816 12:26:27.416758       1 server_linux.go:66] "Using iptables proxy"
	I0816 12:26:28.411848       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0816 12:26:28.423262       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:26:28.595993       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0816 12:26:28.596061       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:26:28.598137       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:26:28.598633       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:26:28.598657       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:26:28.606140       1 config.go:197] "Starting service config controller"
	I0816 12:26:28.606174       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:26:28.606192       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:26:28.606196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:26:28.606586       1 config.go:326] "Starting node config controller"
	I0816 12:26:28.606604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:26:28.711699       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:26:28.711763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:26:28.712111       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] <==
	W0816 12:26:14.457730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:14.457796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:14.457906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 12:26:14.458001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:26:14.458109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 12:26:14.458211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 12:26:14.458309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:14.458394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.459462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 12:26:14.459573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.283261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:15.283388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.341510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:26:15.341907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.350332       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:15.350477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.514850       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 12:26:15.514974       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 12:26:17.449609       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 12:32:36 addons-606349 kubelet[1498]: I0816 12:32:36.405237    1498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhr44\" (UniqueName: \"kubernetes.io/projected/ff0ffcea-ad8a-44e3-a010-29d571f3bd06-kube-api-access-zhr44\") pod \"ff0ffcea-ad8a-44e3-a010-29d571f3bd06\" (UID: \"ff0ffcea-ad8a-44e3-a010-29d571f3bd06\") "
	Aug 16 12:32:36 addons-606349 kubelet[1498]: I0816 12:32:36.413944    1498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0ffcea-ad8a-44e3-a010-29d571f3bd06-kube-api-access-zhr44" (OuterVolumeSpecName: "kube-api-access-zhr44") pod "ff0ffcea-ad8a-44e3-a010-29d571f3bd06" (UID: "ff0ffcea-ad8a-44e3-a010-29d571f3bd06"). InnerVolumeSpecName "kube-api-access-zhr44". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:32:36 addons-606349 kubelet[1498]: I0816 12:32:36.506896    1498 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zhr44\" (UniqueName: \"kubernetes.io/projected/ff0ffcea-ad8a-44e3-a010-29d571f3bd06-kube-api-access-zhr44\") on node \"addons-606349\" DevicePath \"\""
	Aug 16 12:32:37 addons-606349 kubelet[1498]: I0816 12:32:37.245448    1498 scope.go:117] "RemoveContainer" containerID="d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac"
	Aug 16 12:32:37 addons-606349 kubelet[1498]: I0816 12:32:37.271260    1498 scope.go:117] "RemoveContainer" containerID="d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac"
	Aug 16 12:32:37 addons-606349 kubelet[1498]: E0816 12:32:37.271778    1498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac\": container with ID starting with d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac not found: ID does not exist" containerID="d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac"
	Aug 16 12:32:37 addons-606349 kubelet[1498]: I0816 12:32:37.271819    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac"} err="failed to get container status \"d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac\": rpc error: code = NotFound desc = could not find container \"d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac\": container with ID starting with d2c92b44534a2ede72f9b38cde81395e1f58bfab5abbaa8283e7302f95585fac not found: ID does not exist"
	Aug 16 12:32:37 addons-606349 kubelet[1498]: I0816 12:32:37.291746    1498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-ktmlr" podStartSLOduration=1.20204191 podStartE2EDuration="2.291724329s" podCreationTimestamp="2024-08-16 12:32:35 +0000 UTC" firstStartedPulling="2024-08-16 12:32:35.482142967 +0000 UTC m=+378.588250861" lastFinishedPulling="2024-08-16 12:32:36.571825378 +0000 UTC m=+379.677933280" observedRunningTime="2024-08-16 12:32:37.256893268 +0000 UTC m=+380.363001162" watchObservedRunningTime="2024-08-16 12:32:37.291724329 +0000 UTC m=+380.397832223"
	Aug 16 12:32:37 addons-606349 kubelet[1498]: E0816 12:32:37.350280    1498 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811557350054122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:554761,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:32:37 addons-606349 kubelet[1498]: E0816 12:32:37.350315    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811557350054122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:554761,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:32:39 addons-606349 kubelet[1498]: I0816 12:32:39.043611    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17e12bc-2f54-4261-83c6-994f43060d06" path="/var/lib/kubelet/pods/a17e12bc-2f54-4261-83c6-994f43060d06/volumes"
	Aug 16 12:32:39 addons-606349 kubelet[1498]: I0816 12:32:39.044033    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea11d33f-61b0-4c41-a056-473e53a382da" path="/var/lib/kubelet/pods/ea11d33f-61b0-4c41-a056-473e53a382da/volumes"
	Aug 16 12:32:39 addons-606349 kubelet[1498]: I0816 12:32:39.044466    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0ffcea-ad8a-44e3-a010-29d571f3bd06" path="/var/lib/kubelet/pods/ff0ffcea-ad8a-44e3-a010-29d571f3bd06/volumes"
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.041279    1498 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-tlscx" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.255617    1498 scope.go:117] "RemoveContainer" containerID="f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05"
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.271598    1498 scope.go:117] "RemoveContainer" containerID="f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05"
	Aug 16 12:32:41 addons-606349 kubelet[1498]: E0816 12:32:41.272051    1498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05\": container with ID starting with f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05 not found: ID does not exist" containerID="f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05"
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.272088    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05"} err="failed to get container status \"f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05\": rpc error: code = NotFound desc = could not find container \"f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05\": container with ID starting with f50d93afe39d085ae45ebecc1ee7a3962eba5edc53b7a1fe92a3bd0ef0c04a05 not found: ID does not exist"
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.341373    1498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42f3c3a2-f4f1-45b7-bc28-5f707d11e870-webhook-cert\") pod \"42f3c3a2-f4f1-45b7-bc28-5f707d11e870\" (UID: \"42f3c3a2-f4f1-45b7-bc28-5f707d11e870\") "
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.341436    1498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7k5p\" (UniqueName: \"kubernetes.io/projected/42f3c3a2-f4f1-45b7-bc28-5f707d11e870-kube-api-access-j7k5p\") pod \"42f3c3a2-f4f1-45b7-bc28-5f707d11e870\" (UID: \"42f3c3a2-f4f1-45b7-bc28-5f707d11e870\") "
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.343897    1498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f3c3a2-f4f1-45b7-bc28-5f707d11e870-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "42f3c3a2-f4f1-45b7-bc28-5f707d11e870" (UID: "42f3c3a2-f4f1-45b7-bc28-5f707d11e870"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.345924    1498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f3c3a2-f4f1-45b7-bc28-5f707d11e870-kube-api-access-j7k5p" (OuterVolumeSpecName: "kube-api-access-j7k5p") pod "42f3c3a2-f4f1-45b7-bc28-5f707d11e870" (UID: "42f3c3a2-f4f1-45b7-bc28-5f707d11e870"). InnerVolumeSpecName "kube-api-access-j7k5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.441964    1498 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j7k5p\" (UniqueName: \"kubernetes.io/projected/42f3c3a2-f4f1-45b7-bc28-5f707d11e870-kube-api-access-j7k5p\") on node \"addons-606349\" DevicePath \"\""
	Aug 16 12:32:41 addons-606349 kubelet[1498]: I0816 12:32:41.442015    1498 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42f3c3a2-f4f1-45b7-bc28-5f707d11e870-webhook-cert\") on node \"addons-606349\" DevicePath \"\""
	Aug 16 12:32:43 addons-606349 kubelet[1498]: I0816 12:32:43.043466    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f3c3a2-f4f1-45b7-bc28-5f707d11e870" path="/var/lib/kubelet/pods/42f3c3a2-f4f1-45b7-bc28-5f707d11e870/volumes"
	
	
	==> storage-provisioner [21cba91f907bb4abdbf83f51e5c492db7ff92a47790e579450b57efe1e853126] <==
	I0816 12:27:09.210683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 12:27:09.268808       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 12:27:09.268932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 12:27:09.282317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 12:27:09.282689       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-606349_487ba23a-6f7c-42db-9bd9-8e545be5ba0a!
	I0816 12:27:09.282380       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a29a483c-3c46-40f9-9b11-c30ec8e820c9", APIVersion:"v1", ResourceVersion:"891", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-606349_487ba23a-6f7c-42db-9bd9-8e545be5ba0a became leader
	I0816 12:27:09.412252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-606349_487ba23a-6f7c-42db-9bd9-8e545be5ba0a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-606349 -n addons-606349
helpers_test.go:261: (dbg) Run:  kubectl --context addons-606349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.76s)

TestAddons/parallel/MetricsServer (304.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 12.231199ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-lfhc7" [93c15fce-49db-484e-817d-4f2f088bd4e5] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004265671s
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (100.726359ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 3m1.592103193s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (102.244591ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 3m5.181586183s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (120.781645ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 3m11.238950143s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (120.800033ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 3m20.388225391s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (117.216267ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 3m32.564126742s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (93.753934ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 3m51.458382706s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (83.918055ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 4m18.811516123s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (90.873201ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 5m3.695172482s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (91.236169ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 6m8.397332524s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (107.50119ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 6m52.806478446s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-606349 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-606349 top pods -n kube-system: exit status 1 (87.084428ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8ctjp, age: 7m56.843105733s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-606349
helpers_test.go:235: (dbg) docker inspect addons-606349:

-- stdout --
	[
	    {
	        "Id": "00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473",
	        "Created": "2024-08-16T12:25:51.656195826Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1387978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-16T12:25:51.797419245Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/hostname",
	        "HostsPath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/hosts",
	        "LogPath": "/var/lib/docker/containers/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473/00fb883fa653b16a5c6a3d4eaeeb799046b2388cf8d7532d6e9254c4f46b6473-json.log",
	        "Name": "/addons-606349",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-606349:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-606349",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b-init/diff:/var/lib/docker/overlay2/287088eb3e5bb39feac9f608f19b8b2d9575f8872ab339d74583c457d8cec343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c37d11e535a7cad379d63a191e42f295021a6bf1fbb6115a319824a188b5c48b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-606349",
	                "Source": "/var/lib/docker/volumes/addons-606349/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-606349",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-606349",
	                "name.minikube.sigs.k8s.io": "addons-606349",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "498fa5ba924fef32fe6be2aa7de6a03e13b2d90a4f2fe3fe315ab2f3e4eaa7da",
	            "SandboxKey": "/var/run/docker/netns/498fa5ba924f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-606349": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "327cc4c0f93e957099f42f5df5695645a067d1bd5cae73d86f837e1db675491d",
	                    "EndpointID": "8260839433a235518c970e8f26be3c344496d9efb8d679797cca94a5e67a26c4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-606349",
	                        "00fb883fa653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-606349 -n addons-606349
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 logs -n 25: (1.464130438s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-288613 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | download-docker-288613                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-288613                                                                   | download-docker-288613 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-169699   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | binary-mirror-169699                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34739                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-169699                                                                     | binary-mirror-169699   | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | addons-606349                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | addons-606349                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-606349 --wait=true                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:28 UTC | 16 Aug 24 12:29 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-606349 ip                                                                            | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:29 UTC | 16 Aug 24 12:29 UTC |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:29 UTC | 16 Aug 24 12:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-606349 addons                                                                        | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:29 UTC | 16 Aug 24 12:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-606349 addons                                                                        | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC | 16 Aug 24 12:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC | 16 Aug 24 12:30 UTC |
	|         | addons-606349                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-606349 ssh curl -s                                                                   | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-606349 ip                                                                            | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-606349 ssh cat                                                                       | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	|         | /opt/local-path-provisioner/pvc-b19a7cb6-3608-45c2-ba67-caaddd2e79d9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:32 UTC | 16 Aug 24 12:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:33 UTC | 16 Aug 24 12:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:33 UTC | 16 Aug 24 12:33 UTC |
	|         | -p addons-606349                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:33 UTC | 16 Aug 24 12:33 UTC |
	|         | addons-606349                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:33 UTC | 16 Aug 24 12:33 UTC |
	|         | -p addons-606349                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-606349 addons disable                                                                | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:33 UTC | 16 Aug 24 12:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-606349 addons                                                                        | addons-606349          | jenkins | v1.33.1 | 16 Aug 24 12:34 UTC | 16 Aug 24 12:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:25:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:25:26.244246 1387479 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:25:26.244693 1387479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:26.244740 1387479 out.go:358] Setting ErrFile to fd 2...
	I0816 12:25:26.244762 1387479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:26.245074 1387479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:25:26.245612 1387479 out.go:352] Setting JSON to false
	I0816 12:25:26.246574 1387479 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36470,"bootTime":1723774657,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 12:25:26.246688 1387479 start.go:139] virtualization:  
	I0816 12:25:26.249289 1387479 out.go:177] * [addons-606349] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 12:25:26.251202 1387479 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:25:26.251282 1387479 notify.go:220] Checking for updates...
	I0816 12:25:26.254687 1387479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:25:26.256499 1387479 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:25:26.258399 1387479 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 12:25:26.260433 1387479 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 12:25:26.262108 1387479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:25:26.264250 1387479 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:25:26.285892 1387479 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 12:25:26.286019 1387479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:26.354490 1387479 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 12:25:26.344579173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:26.354607 1387479 docker.go:307] overlay module found
	I0816 12:25:26.356669 1387479 out.go:177] * Using the docker driver based on user configuration
	I0816 12:25:26.358377 1387479 start.go:297] selected driver: docker
	I0816 12:25:26.358392 1387479 start.go:901] validating driver "docker" against <nil>
	I0816 12:25:26.358407 1387479 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:25:26.359012 1387479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:26.409973 1387479 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 12:25:26.401233542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:26.410138 1387479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:25:26.410365 1387479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:25:26.412053 1387479 out.go:177] * Using Docker driver with root privileges
	I0816 12:25:26.413675 1387479 cni.go:84] Creating CNI manager for ""
	I0816 12:25:26.413699 1387479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:25:26.413711 1387479 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 12:25:26.413839 1387479 start.go:340] cluster config:
	{Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:25:26.415811 1387479 out.go:177] * Starting "addons-606349" primary control-plane node in "addons-606349" cluster
	I0816 12:25:26.417573 1387479 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 12:25:26.419346 1387479 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0816 12:25:26.421162 1387479 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:26.421218 1387479 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0816 12:25:26.421233 1387479 cache.go:56] Caching tarball of preloaded images
	I0816 12:25:26.421256 1387479 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0816 12:25:26.421317 1387479 preload.go:172] Found /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0816 12:25:26.421327 1387479 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 12:25:26.421672 1387479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/config.json ...
	I0816 12:25:26.421704 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/config.json: {Name:mk0b81af05dcdc24aa88b9fad79390a8f27be4ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:26.436293 1387479 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0816 12:25:26.436422 1387479 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0816 12:25:26.436442 1387479 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0816 12:25:26.436447 1387479 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0816 12:25:26.436455 1387479 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0816 12:25:26.436461 1387479 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0816 12:25:43.776811 1387479 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0816 12:25:43.776853 1387479 cache.go:194] Successfully downloaded all kic artifacts
	I0816 12:25:43.776899 1387479 start.go:360] acquireMachinesLock for addons-606349: {Name:mk868a0d8a6549768fa50c40f10f574b8d2ed4ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 12:25:43.777032 1387479 start.go:364] duration metric: took 109.645µs to acquireMachinesLock for "addons-606349"
	I0816 12:25:43.777079 1387479 start.go:93] Provisioning new machine with config: &{Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:25:43.777166 1387479 start.go:125] createHost starting for "" (driver="docker")
	I0816 12:25:43.779574 1387479 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 12:25:43.779838 1387479 start.go:159] libmachine.API.Create for "addons-606349" (driver="docker")
	I0816 12:25:43.779877 1387479 client.go:168] LocalClient.Create starting
	I0816 12:25:43.780013 1387479 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem
	I0816 12:25:44.067271 1387479 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem
	I0816 12:25:45.207143 1387479 cli_runner.go:164] Run: docker network inspect addons-606349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 12:25:45.226413 1387479 cli_runner.go:211] docker network inspect addons-606349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 12:25:45.226525 1387479 network_create.go:284] running [docker network inspect addons-606349] to gather additional debugging logs...
	I0816 12:25:45.226552 1387479 cli_runner.go:164] Run: docker network inspect addons-606349
	W0816 12:25:45.244182 1387479 cli_runner.go:211] docker network inspect addons-606349 returned with exit code 1
	I0816 12:25:45.244224 1387479 network_create.go:287] error running [docker network inspect addons-606349]: docker network inspect addons-606349: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-606349 not found
	I0816 12:25:45.244239 1387479 network_create.go:289] output of [docker network inspect addons-606349]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-606349 not found
	
	** /stderr **
	I0816 12:25:45.244358 1387479 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 12:25:45.264463 1387479 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b8890}
	I0816 12:25:45.264518 1387479 network_create.go:124] attempt to create docker network addons-606349 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 12:25:45.264596 1387479 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-606349 addons-606349
	I0816 12:25:45.366072 1387479 network_create.go:108] docker network addons-606349 192.168.49.0/24 created
	I0816 12:25:45.366122 1387479 kic.go:121] calculated static IP "192.168.49.2" for the "addons-606349" container
	I0816 12:25:45.366229 1387479 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0816 12:25:45.384830 1387479 cli_runner.go:164] Run: docker volume create addons-606349 --label name.minikube.sigs.k8s.io=addons-606349 --label created_by.minikube.sigs.k8s.io=true
	I0816 12:25:45.410123 1387479 oci.go:103] Successfully created a docker volume addons-606349
	I0816 12:25:45.410339 1387479 cli_runner.go:164] Run: docker run --rm --name addons-606349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606349 --entrypoint /usr/bin/test -v addons-606349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0816 12:25:47.526836 1387479 cli_runner.go:217] Completed: docker run --rm --name addons-606349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606349 --entrypoint /usr/bin/test -v addons-606349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (2.1164221s)
	I0816 12:25:47.526865 1387479 oci.go:107] Successfully prepared a docker volume addons-606349
	I0816 12:25:47.526885 1387479 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:47.526905 1387479 kic.go:194] Starting extracting preloaded images to volume ...
	I0816 12:25:47.526984 1387479 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-606349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 12:25:51.585348 1387479 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-606349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.058321851s)
	I0816 12:25:51.585381 1387479 kic.go:203] duration metric: took 4.058473184s to extract preloaded images to volume ...
	W0816 12:25:51.585513 1387479 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0816 12:25:51.585640 1387479 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 12:25:51.642591 1387479 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-606349 --name addons-606349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-606349 --network addons-606349 --ip 192.168.49.2 --volume addons-606349:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0816 12:25:51.971355 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Running}}
	I0816 12:25:51.990595 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:25:52.016857 1387479 cli_runner.go:164] Run: docker exec addons-606349 stat /var/lib/dpkg/alternatives/iptables
	I0816 12:25:52.086314 1387479 oci.go:144] the created container "addons-606349" has a running status.
	I0816 12:25:52.086346 1387479 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa...
	I0816 12:25:52.548884 1387479 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 12:25:52.589414 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:25:52.631974 1387479 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 12:25:52.631993 1387479 kic_runner.go:114] Args: [docker exec --privileged addons-606349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 12:25:52.729928 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:25:52.748168 1387479 machine.go:93] provisionDockerMachine start ...
	I0816 12:25:52.748290 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:52.769078 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:52.769442 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:52.769461 1387479 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 12:25:52.925415 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606349
	
	I0816 12:25:52.925442 1387479 ubuntu.go:169] provisioning hostname "addons-606349"
	I0816 12:25:52.925511 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:52.947337 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:52.947592 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:52.947604 1387479 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-606349 && echo "addons-606349" | sudo tee /etc/hostname
	I0816 12:25:53.113126 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606349
	
	I0816 12:25:53.113265 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:53.131666 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:53.131905 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:53.131922 1387479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-606349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-606349/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-606349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 12:25:53.269843 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 12:25:53.269874 1387479 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1381335/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1381335/.minikube}
	I0816 12:25:53.269911 1387479 ubuntu.go:177] setting up certificates
	I0816 12:25:53.269921 1387479 provision.go:84] configureAuth start
	I0816 12:25:53.269984 1387479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606349
	I0816 12:25:53.287073 1387479 provision.go:143] copyHostCerts
	I0816 12:25:53.287162 1387479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1381335/.minikube/key.pem (1679 bytes)
	I0816 12:25:53.287292 1387479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.pem (1078 bytes)
	I0816 12:25:53.287359 1387479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1381335/.minikube/cert.pem (1123 bytes)
	I0816 12:25:53.287413 1387479 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca-key.pem org=jenkins.addons-606349 san=[127.0.0.1 192.168.49.2 addons-606349 localhost minikube]
	I0816 12:25:55.008806 1387479 provision.go:177] copyRemoteCerts
	I0816 12:25:55.008898 1387479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 12:25:55.008952 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.030478 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.127227 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 12:25:55.153005 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 12:25:55.179004 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 12:25:55.204145 1387479 provision.go:87] duration metric: took 1.93420761s to configureAuth
	I0816 12:25:55.204175 1387479 ubuntu.go:193] setting minikube options for container-runtime
	I0816 12:25:55.204368 1387479 config.go:182] Loaded profile config "addons-606349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:25:55.204484 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.221601 1387479 main.go:141] libmachine: Using SSH client type: native
	I0816 12:25:55.221867 1387479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34595 <nil> <nil>}
	I0816 12:25:55.221890 1387479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 12:25:55.457148 1387479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 12:25:55.457175 1387479 machine.go:96] duration metric: took 2.708984616s to provisionDockerMachine
	I0816 12:25:55.457186 1387479 client.go:171] duration metric: took 11.677299475s to LocalClient.Create
	I0816 12:25:55.457199 1387479 start.go:167] duration metric: took 11.677363294s to libmachine.API.Create "addons-606349"
	I0816 12:25:55.457208 1387479 start.go:293] postStartSetup for "addons-606349" (driver="docker")
	I0816 12:25:55.457218 1387479 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 12:25:55.457286 1387479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 12:25:55.457332 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.474153 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.571167 1387479 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 12:25:55.574544 1387479 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 12:25:55.574583 1387479 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 12:25:55.574594 1387479 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 12:25:55.574601 1387479 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0816 12:25:55.574612 1387479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1381335/.minikube/addons for local assets ...
	I0816 12:25:55.574688 1387479 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1381335/.minikube/files for local assets ...
	I0816 12:25:55.574713 1387479 start.go:296] duration metric: took 117.500173ms for postStartSetup
	I0816 12:25:55.575035 1387479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606349
	I0816 12:25:55.590591 1387479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/config.json ...
	I0816 12:25:55.590894 1387479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:25:55.590946 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.606849 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.698649 1387479 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0816 12:25:55.703305 1387479 start.go:128] duration metric: took 11.926122525s to createHost
	I0816 12:25:55.703332 1387479 start.go:83] releasing machines lock for "addons-606349", held for 11.926285896s
	I0816 12:25:55.703446 1387479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606349
	I0816 12:25:55.720930 1387479 ssh_runner.go:195] Run: cat /version.json
	I0816 12:25:55.720992 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.721253 1387479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 12:25:55.721302 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:25:55.746420 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.760655 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:25:55.841641 1387479 ssh_runner.go:195] Run: systemctl --version
	I0816 12:25:55.967182 1387479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 12:25:56.113965 1387479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 12:25:56.118239 1387479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:25:56.138549 1387479 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0816 12:25:56.138624 1387479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 12:25:56.173080 1387479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0816 12:25:56.173106 1387479 start.go:495] detecting cgroup driver to use...
	I0816 12:25:56.173139 1387479 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0816 12:25:56.173190 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 12:25:56.190762 1387479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 12:25:56.202960 1387479 docker.go:217] disabling cri-docker service (if available) ...
	I0816 12:25:56.203027 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 12:25:56.217925 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 12:25:56.233459 1387479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 12:25:56.334115 1387479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 12:25:56.430610 1387479 docker.go:233] disabling docker service ...
	I0816 12:25:56.430701 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 12:25:56.451721 1387479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 12:25:56.464002 1387479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 12:25:56.551221 1387479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 12:25:56.651451 1387479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 12:25:56.663024 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 12:25:56.679596 1387479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 12:25:56.679665 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.691059 1387479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 12:25:56.691171 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.700961 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.710933 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.720867 1387479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 12:25:56.730366 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.740168 1387479 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.755971 1387479 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 12:25:56.766391 1387479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 12:25:56.775490 1387479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 12:25:56.784554 1387479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:25:56.863365 1387479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 12:25:56.988848 1387479 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 12:25:56.988997 1387479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 12:25:56.992514 1387479 start.go:563] Will wait 60s for crictl version
	I0816 12:25:56.992583 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:25:56.995969 1387479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 12:25:57.038886 1387479 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0816 12:25:57.039007 1387479 ssh_runner.go:195] Run: crio --version
	I0816 12:25:57.081443 1387479 ssh_runner.go:195] Run: crio --version
	I0816 12:25:57.122218 1387479 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0816 12:25:57.124051 1387479 cli_runner.go:164] Run: docker network inspect addons-606349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 12:25:57.140005 1387479 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 12:25:57.143844 1387479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:25:57.155902 1387479 kubeadm.go:883] updating cluster {Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 12:25:57.156024 1387479 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:57.156085 1387479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:25:57.232976 1387479 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:25:57.233003 1387479 crio.go:433] Images already preloaded, skipping extraction
	I0816 12:25:57.233066 1387479 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 12:25:57.269605 1387479 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 12:25:57.269632 1387479 cache_images.go:84] Images are preloaded, skipping loading
	I0816 12:25:57.269641 1387479 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0816 12:25:57.269821 1387479 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-606349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 12:25:57.269923 1387479 ssh_runner.go:195] Run: crio config
	I0816 12:25:57.317966 1387479 cni.go:84] Creating CNI manager for ""
	I0816 12:25:57.317987 1387479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:25:57.317997 1387479 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 12:25:57.318047 1387479 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-606349 NodeName:addons-606349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 12:25:57.318222 1387479 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-606349"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 12:25:57.318300 1387479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 12:25:57.327302 1387479 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 12:25:57.327399 1387479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 12:25:57.336191 1387479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0816 12:25:57.354460 1387479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 12:25:57.373469 1387479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0816 12:25:57.391630 1387479 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 12:25:57.394991 1387479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 12:25:57.405830 1387479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:25:57.487288 1387479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:25:57.501171 1387479 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349 for IP: 192.168.49.2
	I0816 12:25:57.501196 1387479 certs.go:194] generating shared ca certs ...
	I0816 12:25:57.501241 1387479 certs.go:226] acquiring lock for ca certs: {Name:mkdf245990f96a1e9a969aa18ae3f00f60af8904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:57.501406 1387479 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key
	I0816 12:25:57.773948 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt ...
	I0816 12:25:57.773981 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt: {Name:mk0abc725d07af006b1bd80999d9cb74372c95a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:57.774187 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key ...
	I0816 12:25:57.774202 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key: {Name:mk244103f56694344cc7fa24fc8b304dd5ded8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:57.774807 1387479 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key
	I0816 12:25:58.658106 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.crt ...
	I0816 12:25:58.658141 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.crt: {Name:mk329aa97becc0d5b2bd470a4f80d695baf7cc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.658336 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key ...
	I0816 12:25:58.658349 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key: {Name:mk0054d6804513c813fbc7c8345ac7f5a155ba89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.658830 1387479 certs.go:256] generating profile certs ...
	I0816 12:25:58.658898 1387479 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.key
	I0816 12:25:58.658916 1387479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt with IP's: []
	I0816 12:25:58.873418 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt ...
	I0816 12:25:58.873451 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: {Name:mkf34b318a06ff1a691f707ba7f1efe691343c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.874127 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.key ...
	I0816 12:25:58.874144 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.key: {Name:mk360039aca615933913e2216c678df67c9fd603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:58.874868 1387479 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a
	I0816 12:25:58.874891 1387479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0816 12:25:59.306258 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a ...
	I0816 12:25:59.306293 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a: {Name:mk9714ef2bb629be7900e291b21c0af1c17e99df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.307020 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a ...
	I0816 12:25:59.307040 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a: {Name:mk2054854d323378f4639c6fb7f0e7448b862005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.307467 1387479 certs.go:381] copying /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt.585d6f6a -> /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt
	I0816 12:25:59.307565 1387479 certs.go:385] copying /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key.585d6f6a -> /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key
	I0816 12:25:59.307624 1387479 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key
	I0816 12:25:59.307646 1387479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt with IP's: []
	I0816 12:25:59.691435 1387479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt ...
	I0816 12:25:59.691471 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt: {Name:mk54d74acf5e459a95168204396bdfebf4a6453e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.692043 1387479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key ...
	I0816 12:25:59.692064 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key: {Name:mkcd3b978b0fa1d409c8422bf4b5e9571781fd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:25:59.692771 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 12:25:59.692843 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/ca.pem (1078 bytes)
	I0816 12:25:59.692878 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/cert.pem (1123 bytes)
	I0816 12:25:59.692920 1387479 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1381335/.minikube/certs/key.pem (1679 bytes)
	I0816 12:25:59.693547 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 12:25:59.719815 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 12:25:59.745920 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 12:25:59.770057 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 12:25:59.795508 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 12:25:59.822319 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 12:25:59.847247 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 12:25:59.872299 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 12:25:59.899440 1387479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1381335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 12:25:59.925192 1387479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 12:25:59.943836 1387479 ssh_runner.go:195] Run: openssl version
	I0816 12:25:59.949518 1387479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 12:25:59.959204 1387479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:25:59.962932 1387479 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:25:59.962997 1387479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 12:25:59.970252 1387479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 12:25:59.979696 1387479 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 12:25:59.983157 1387479 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 12:25:59.983207 1387479 kubeadm.go:392] StartCluster: {Name:addons-606349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:25:59.983297 1387479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 12:25:59.983363 1387479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 12:26:00.115353 1387479 cri.go:89] found id: ""
	I0816 12:26:00.115446 1387479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 12:26:00.175288 1387479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 12:26:00.199801 1387479 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0816 12:26:00.199885 1387479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 12:26:00.245956 1387479 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 12:26:00.245975 1387479 kubeadm.go:157] found existing configuration files:
	
	I0816 12:26:00.246048 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 12:26:00.278254 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 12:26:00.278326 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 12:26:00.312259 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 12:26:00.352452 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 12:26:00.352540 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 12:26:00.382237 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 12:26:00.413887 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 12:26:00.413959 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 12:26:00.425369 1387479 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 12:26:00.436805 1387479 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 12:26:00.436893 1387479 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 12:26:00.447577 1387479 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 12:26:00.495011 1387479 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 12:26:00.495462 1387479 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 12:26:00.534487 1387479 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0816 12:26:00.534609 1387479 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0816 12:26:00.534649 1387479 kubeadm.go:310] OS: Linux
	I0816 12:26:00.534698 1387479 kubeadm.go:310] CGROUPS_CPU: enabled
	I0816 12:26:00.534772 1387479 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0816 12:26:00.534825 1387479 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0816 12:26:00.534873 1387479 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0816 12:26:00.534924 1387479 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0816 12:26:00.534978 1387479 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0816 12:26:00.535031 1387479 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0816 12:26:00.535082 1387479 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0816 12:26:00.535132 1387479 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0816 12:26:00.609032 1387479 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 12:26:00.609143 1387479 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 12:26:00.609238 1387479 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 12:26:00.617368 1387479 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 12:26:00.620808 1387479 out.go:235]   - Generating certificates and keys ...
	I0816 12:26:00.620930 1387479 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 12:26:00.621044 1387479 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 12:26:02.014464 1387479 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 12:26:02.544117 1387479 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 12:26:02.970050 1387479 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 12:26:03.178531 1387479 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 12:26:03.475097 1387479 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 12:26:03.475384 1387479 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-606349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 12:26:03.846624 1387479 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 12:26:03.846844 1387479 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-606349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 12:26:04.314895 1387479 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 12:26:05.128042 1387479 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 12:26:05.859132 1387479 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 12:26:05.859611 1387479 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 12:26:06.911669 1387479 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 12:26:07.106803 1387479 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 12:26:07.803204 1387479 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 12:26:08.245881 1387479 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 12:26:08.647671 1387479 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 12:26:08.648387 1387479 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 12:26:08.651451 1387479 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 12:26:08.653660 1387479 out.go:235]   - Booting up control plane ...
	I0816 12:26:08.653774 1387479 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 12:26:08.653850 1387479 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 12:26:08.656007 1387479 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 12:26:08.666103 1387479 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 12:26:08.672626 1387479 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 12:26:08.672882 1387479 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 12:26:08.771618 1387479 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 12:26:08.771740 1387479 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 12:26:10.773496 1387479 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001941203s
	I0816 12:26:10.773583 1387479 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 12:26:16.274804 1387479 kubeadm.go:310] [api-check] The API server is healthy after 5.501278399s
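The `[kubelet-check]` and `[api-check]` phases above are deadline-bounded polls against a health endpoint ("This can take up to 4m0s"), reporting the elapsed time on success. A minimal sketch of that wait loop — the names `wait_for`, `probe`, and `interval` are mine, not kubeadm's:

```python
import time


def wait_for(probe, timeout=240.0, interval=0.5, clock=time.monotonic, sleep=time.sleep):
    """Poll `probe` until it returns True or `timeout` seconds elapse.

    Mirrors the shape of kubeadm's health checks: repeated probes against
    e.g. http://127.0.0.1:10248/healthz, giving up after the deadline.
    Returns elapsed seconds on success, raises TimeoutError otherwise.
    """
    start = clock()
    while True:
        if probe():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError(f"probe not healthy after {timeout}s")
        sleep(interval)
```

In the log, the kubelet probe succeeded after ~2.0s and the API server probe after ~5.5s, both well inside the 4m0s deadline.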
	I0816 12:26:16.299231 1387479 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 12:26:16.315289 1387479 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 12:26:16.343256 1387479 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 12:26:16.343451 1387479 kubeadm.go:310] [mark-control-plane] Marking the node addons-606349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 12:26:16.354524 1387479 kubeadm.go:310] [bootstrap-token] Using token: 1vr55b.ts8mrotbuaenwvy3
	I0816 12:26:16.356352 1387479 out.go:235]   - Configuring RBAC rules ...
	I0816 12:26:16.356487 1387479 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 12:26:16.362891 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 12:26:16.371196 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 12:26:16.375049 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 12:26:16.378639 1387479 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 12:26:16.383291 1387479 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 12:26:16.684409 1387479 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 12:26:17.130758 1387479 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 12:26:17.684856 1387479 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 12:26:17.684938 1387479 kubeadm.go:310] 
	I0816 12:26:17.685018 1387479 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 12:26:17.685025 1387479 kubeadm.go:310] 
	I0816 12:26:17.685119 1387479 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 12:26:17.685125 1387479 kubeadm.go:310] 
	I0816 12:26:17.685162 1387479 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 12:26:17.685221 1387479 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 12:26:17.685280 1387479 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 12:26:17.685298 1387479 kubeadm.go:310] 
	I0816 12:26:17.685367 1387479 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 12:26:17.685381 1387479 kubeadm.go:310] 
	I0816 12:26:17.685444 1387479 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 12:26:17.685452 1387479 kubeadm.go:310] 
	I0816 12:26:17.685503 1387479 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 12:26:17.685592 1387479 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 12:26:17.685677 1387479 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 12:26:17.685693 1387479 kubeadm.go:310] 
	I0816 12:26:17.685827 1387479 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 12:26:17.685919 1387479 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 12:26:17.685934 1387479 kubeadm.go:310] 
	I0816 12:26:17.686022 1387479 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vr55b.ts8mrotbuaenwvy3 \
	I0816 12:26:17.686160 1387479 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9e7c8c29c13fd1e89c944beb24d85c1145fec055b6164d87d49cd9cc484240a \
	I0816 12:26:17.686188 1387479 kubeadm.go:310] 	--control-plane 
	I0816 12:26:17.686196 1387479 kubeadm.go:310] 
	I0816 12:26:17.686298 1387479 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 12:26:17.686309 1387479 kubeadm.go:310] 
	I0816 12:26:17.686413 1387479 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vr55b.ts8mrotbuaenwvy3 \
	I0816 12:26:17.686554 1387479 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f9e7c8c29c13fd1e89c944beb24d85c1145fec055b6164d87d49cd9cc484240a 
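The bootstrap token printed in the join commands above (`1vr55b.ts8mrotbuaenwvy3`) follows kubeadm's documented `<token-id>.<token-secret>` format: 6 lowercase alphanumerics, a dot, then 16 more. A small validator, purely illustrative (the helper name is mine):

```python
import re

# Documented kubeadm bootstrap token shape: [a-z0-9]{6}.[a-z0-9]{16}
TOKEN_RE = re.compile(r"^([a-z0-9]{6})\.([a-z0-9]{16})$")


def parse_bootstrap_token(token: str) -> tuple[str, str]:
    """Split a kubeadm bootstrap token into (token-id, token-secret)."""
    m = TOKEN_RE.fullmatch(token)
    if not m:
        raise ValueError(f"not a bootstrap token: {token!r}")
    return m.group(1), m.group(2)


print(parse_bootstrap_token("1vr55b.ts8mrotbuaenwvy3"))
```

The token-id is public (it names the Secret `bootstrap-token-<id>` in `kube-system`); only the secret half must be protected.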
	I0816 12:26:17.690142 1387479 kubeadm.go:310] W0816 12:26:00.489871    1177 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:26:17.690432 1387479 kubeadm.go:310] W0816 12:26:00.491748    1177 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 12:26:17.690644 1387479 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0816 12:26:17.690744 1387479 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 12:26:17.690769 1387479 cni.go:84] Creating CNI manager for ""
	I0816 12:26:17.690781 1387479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:26:17.694115 1387479 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 12:26:17.695952 1387479 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 12:26:17.700275 1387479 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 12:26:17.700308 1387479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 12:26:17.721894 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 12:26:18.020373 1387479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 12:26:18.020542 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:18.020640 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-606349 minikube.k8s.io/updated_at=2024_08_16T12_26_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05 minikube.k8s.io/name=addons-606349 minikube.k8s.io/primary=true
	I0816 12:26:18.221540 1387479 ops.go:34] apiserver oom_adj: -16
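The `apiserver oom_adj: -16` line comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe a few lines up: a negative value tells the kernel OOM killer to prefer other victims over the apiserver. A sketch of the same read from Python, using the current process and the modern `oom_score_adj` file instead of the legacy `oom_adj` (assumes Linux with procfs mounted):

```python
from pathlib import Path


def read_oom_score_adj(pid="self"):
    # /proc/<pid>/oom_score_adj is the modern knob (range -1000..1000);
    # the log reads the legacy /proc/<pid>/oom_adj (range -17..15) instead.
    return int(Path(f"/proc/{pid}/oom_score_adj").read_text())


print(read_oom_score_adj())
```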
	I0816 12:26:18.221638 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:18.722438 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:19.221726 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:19.722083 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:20.222043 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:20.722263 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:21.221829 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:21.721795 1387479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 12:26:21.815534 1387479 kubeadm.go:1113] duration metric: took 3.795066848s to wait for elevateKubeSystemPrivileges
	I0816 12:26:21.815570 1387479 kubeadm.go:394] duration metric: took 21.832366761s to StartCluster
	I0816 12:26:21.815588 1387479 settings.go:142] acquiring lock: {Name:mk061dbb4361ece7e549334669d8986f48680b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:26:21.815719 1387479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:26:21.816191 1387479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1381335/kubeconfig: {Name:mk5d80d953866a4dbf0a0227ebebea809a97d7a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 12:26:21.816957 1387479 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 12:26:21.817096 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 12:26:21.817357 1387479 config.go:182] Loaded profile config "addons-606349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:26:21.817396 1387479 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
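The `toEnable=map[...]` line above is Go's `fmt` rendering of a `map[string]bool`, with space-separated `key:value` pairs in sorted key order. If you post-process such logs, the pairs split cleanly; this parser is a hypothetical helper, not part of minikube:

```python
def parse_go_bool_map(s):
    """Parse Go fmt output for a map[string]bool, e.g. "map[a:true b:false]"."""
    if not (s.startswith("map[") and s.endswith("]")):
        raise ValueError(f"not a Go map literal: {s!r}")
    body = s[4:-1]
    # rsplit guards against keys that themselves contain ':' (none do here)
    return {k: v == "true"
            for k, v in (pair.rsplit(":", 1) for pair in body.split())}


enabled = parse_go_bool_map("map[ingress:true metrics-server:true volcano:true gvisor:false]")
print(sorted(k for k, v in enabled.items() if v))
```

Applied to the log line, this shows `ingress`, `metrics-server`, and `volcano` among the addons requested — the first two are exactly the tests that failed in this run, and `volcano` is rejected later because the crio runtime does not support it.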
	I0816 12:26:21.817479 1387479 addons.go:69] Setting yakd=true in profile "addons-606349"
	I0816 12:26:21.817504 1387479 addons.go:234] Setting addon yakd=true in "addons-606349"
	I0816 12:26:21.817532 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.818071 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.818303 1387479 addons.go:69] Setting inspektor-gadget=true in profile "addons-606349"
	I0816 12:26:21.818329 1387479 addons.go:234] Setting addon inspektor-gadget=true in "addons-606349"
	I0816 12:26:21.818353 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.818751 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.819138 1387479 addons.go:69] Setting metrics-server=true in profile "addons-606349"
	I0816 12:26:21.819168 1387479 addons.go:234] Setting addon metrics-server=true in "addons-606349"
	I0816 12:26:21.819193 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.819585 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.819896 1387479 addons.go:69] Setting cloud-spanner=true in profile "addons-606349"
	I0816 12:26:21.819931 1387479 addons.go:234] Setting addon cloud-spanner=true in "addons-606349"
	I0816 12:26:21.819968 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.820372 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.820537 1387479 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-606349"
	I0816 12:26:21.820564 1387479 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-606349"
	I0816 12:26:21.820600 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.820979 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.826362 1387479 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-606349"
	I0816 12:26:21.826444 1387479 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-606349"
	I0816 12:26:21.826482 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.826947 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.841953 1387479 addons.go:69] Setting registry=true in profile "addons-606349"
	I0816 12:26:21.841999 1387479 addons.go:234] Setting addon registry=true in "addons-606349"
	I0816 12:26:21.842038 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.842517 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.845976 1387479 addons.go:69] Setting default-storageclass=true in profile "addons-606349"
	I0816 12:26:21.846028 1387479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-606349"
	I0816 12:26:21.846341 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.861270 1387479 addons.go:69] Setting storage-provisioner=true in profile "addons-606349"
	I0816 12:26:21.861316 1387479 addons.go:234] Setting addon storage-provisioner=true in "addons-606349"
	I0816 12:26:21.861354 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.861855 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.862020 1387479 addons.go:69] Setting gcp-auth=true in profile "addons-606349"
	I0816 12:26:21.862049 1387479 mustload.go:65] Loading cluster: addons-606349
	I0816 12:26:21.862202 1387479 config.go:182] Loaded profile config "addons-606349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:26:21.862416 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.881402 1387479 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-606349"
	I0816 12:26:21.881438 1387479 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-606349"
	I0816 12:26:21.881781 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.902129 1387479 addons.go:69] Setting ingress=true in profile "addons-606349"
	I0816 12:26:21.902176 1387479 addons.go:234] Setting addon ingress=true in "addons-606349"
	I0816 12:26:21.902224 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.902724 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.902988 1387479 addons.go:69] Setting volcano=true in profile "addons-606349"
	I0816 12:26:21.903057 1387479 addons.go:234] Setting addon volcano=true in "addons-606349"
	I0816 12:26:21.903121 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.910931 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.925344 1387479 addons.go:69] Setting ingress-dns=true in profile "addons-606349"
	I0816 12:26:21.925389 1387479 addons.go:234] Setting addon ingress-dns=true in "addons-606349"
	I0816 12:26:21.925436 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.926005 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.930186 1387479 out.go:177] * Verifying Kubernetes components...
	I0816 12:26:21.936696 1387479 addons.go:69] Setting volumesnapshots=true in profile "addons-606349"
	I0816 12:26:21.936736 1387479 addons.go:234] Setting addon volumesnapshots=true in "addons-606349"
	I0816 12:26:21.936791 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:21.937385 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:21.981496 1387479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 12:26:21.985040 1387479 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 12:26:21.991719 1387479 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 12:26:21.991804 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 12:26:21.991815 1387479 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 12:26:21.991887 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:21.992483 1387479 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 12:26:21.995593 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 12:26:21.995622 1387479 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 12:26:21.995699 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.003893 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 12:26:22.004037 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 12:26:22.007477 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.030799 1387479 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 12:26:22.034095 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 12:26:22.034304 1387479 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 12:26:22.035931 1387479 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 12:26:22.035953 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 12:26:22.036028 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.036295 1387479 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 12:26:22.036309 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 12:26:22.036351 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.067930 1387479 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 12:26:22.069687 1387479 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:26:22.069711 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 12:26:22.069802 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.079848 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 12:26:22.082040 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 12:26:22.083812 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 12:26:22.085628 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 12:26:22.087600 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 12:26:22.089605 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 12:26:22.091940 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 12:26:22.093792 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 12:26:22.096326 1387479 addons.go:234] Setting addon default-storageclass=true in "addons-606349"
	I0816 12:26:22.096373 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:22.096836 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:22.103530 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 12:26:22.103578 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 12:26:22.103654 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.133357 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.187267 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:22.195638 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 12:26:22.195879 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:26:22.197472 1387479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:26:22.197492 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 12:26:22.197559 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.199477 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:26:22.201185 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0816 12:26:22.203509 1387479 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:26:22.203530 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 12:26:22.203596 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.220317 1387479 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-606349"
	I0816 12:26:22.220363 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:22.220823 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	W0816 12:26:22.221172 1387479 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 12:26:22.237432 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 12:26:22.239157 1387479 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:26:22.239185 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 12:26:22.239261 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.250996 1387479 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 12:26:22.251305 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.254652 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.255441 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 12:26:22.255461 1387479 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 12:26:22.255536 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.269013 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.285882 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.342209 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.351221 1387479 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 12:26:22.351242 1387479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 12:26:22.351302 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.369269 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.393893 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.404561 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.407605 1387479 out.go:177]   - Using image docker.io/busybox:stable
	I0816 12:26:22.411058 1387479 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 12:26:22.418294 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 12:26:22.418320 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 12:26:22.419155 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.419883 1387479 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:26:22.419905 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 12:26:22.419964 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:22.420667 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.441918 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	W0816 12:26:22.443015 1387479 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0816 12:26:22.443043 1387479 retry.go:31] will retry after 253.190095ms: ssh: handshake failed: EOF
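The `retry.go:31] will retry after 253.190095ms` line shows minikube recovering from the transient SSH handshake EOF with a jittered delay rather than failing the run. A generic retry helper in the same spirit — the backoff constants and function names here are illustrative, not minikube's actual values:

```python
import random
import time


def retry(fn, attempts=4, base=0.25, jitter=0.1, sleep=time.sleep):
    """Call fn until it succeeds; sleep base*2^i plus random jitter between tries.

    Re-raises the last exception once all attempts are exhausted, so a
    persistent failure still surfaces to the caller.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base * (2 ** i) + random.uniform(0.0, jitter))
```

In the log the first retry succeeds: the subsequent `sshutil.go:53] new ssh client` lines reuse the same `127.0.0.1:34595` endpoint without further handshake failures.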
	I0816 12:26:22.455419 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:22.553555 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 12:26:22.560042 1387479 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 12:26:22.685454 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 12:26:22.701257 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 12:26:22.701318 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 12:26:22.703978 1387479 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:26:22.704041 1387479 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 12:26:22.715645 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 12:26:22.715711 1387479 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 12:26:22.751644 1387479 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 12:26:22.751708 1387479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 12:26:22.756494 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 12:26:22.762008 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 12:26:22.762072 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 12:26:22.783053 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 12:26:22.794153 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 12:26:22.796859 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 12:26:22.808356 1387479 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 12:26:22.808430 1387479 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 12:26:22.827184 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 12:26:22.871565 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 12:26:22.871629 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 12:26:22.874864 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 12:26:22.874925 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 12:26:22.875468 1387479 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 12:26:22.875509 1387479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 12:26:22.900568 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 12:26:22.907833 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 12:26:22.907898 1387479 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 12:26:22.957262 1387479 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:26:22.957326 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 12:26:22.980125 1387479 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 12:26:22.980188 1387479 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 12:26:23.010188 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 12:26:23.010271 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 12:26:23.045261 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 12:26:23.100440 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 12:26:23.100513 1387479 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 12:26:23.103595 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 12:26:23.103662 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 12:26:23.125222 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 12:26:23.160647 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 12:26:23.160731 1387479 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 12:26:23.204177 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 12:26:23.204264 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 12:26:23.270022 1387479 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:26:23.270096 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 12:26:23.299741 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 12:26:23.299808 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 12:26:23.318244 1387479 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:26:23.318318 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 12:26:23.369932 1387479 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 12:26:23.370005 1387479 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 12:26:23.430629 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 12:26:23.447156 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 12:26:23.447230 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 12:26:23.455221 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:26:23.474905 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 12:26:23.474988 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 12:26:23.541306 1387479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.724178461s)
	I0816 12:26:23.541407 1387479 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.559833859s)
	I0816 12:26:23.541574 1387479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 12:26:23.541734 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 12:26:23.556141 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 12:26:23.556223 1387479 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 12:26:23.570749 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 12:26:23.570819 1387479 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 12:26:23.656933 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 12:26:23.657007 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 12:26:23.663739 1387479 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:26:23.663804 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 12:26:23.742251 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 12:26:23.742323 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 12:26:23.760399 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 12:26:23.860977 1387479 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:26:23.861046 1387479 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 12:26:23.981909 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 12:26:26.382179 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.696649581s)
	I0816 12:26:26.382239 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.625675035s)
	I0816 12:26:28.818938 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.035802448s)
	I0816 12:26:28.818974 1387479 addons.go:475] Verifying addon ingress=true in "addons-606349"
	I0816 12:26:28.819177 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.024951562s)
	I0816 12:26:28.819257 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.022313182s)
	I0816 12:26:28.819303 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.992098672s)
	I0816 12:26:28.819541 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.918900784s)
	I0816 12:26:28.819563 1387479 addons.go:475] Verifying addon metrics-server=true in "addons-606349"
	I0816 12:26:28.819591 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.774255927s)
	I0816 12:26:28.819755 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.694435854s)
	I0816 12:26:28.819769 1387479 addons.go:475] Verifying addon registry=true in "addons-606349"
	I0816 12:26:28.819871 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.389163723s)
	I0816 12:26:28.822639 1387479 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-606349 service yakd-dashboard -n yakd-dashboard
	
	I0816 12:26:28.822747 1387479 out.go:177] * Verifying registry addon...
	I0816 12:26:28.822789 1387479 out.go:177] * Verifying ingress addon...
	I0816 12:26:28.826527 1387479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 12:26:28.827515 1387479 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0816 12:26:28.852243 1387479 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0816 12:26:28.877939 1387479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 12:26:28.878011 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:28.879079 1387479 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 12:26:28.879144 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:28.880213 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.424894137s)
	W0816 12:26:28.880281 1387479 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 12:26:28.880313 1387479 retry.go:31] will retry after 181.794351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 12:26:28.880372 1387479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.338587386s)
	I0816 12:26:28.880404 1387479 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0816 12:26:28.880441 1387479 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.338854937s)
	I0816 12:26:28.881583 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.121088195s)
	I0816 12:26:28.882672 1387479 node_ready.go:35] waiting up to 6m0s for node "addons-606349" to be "Ready" ...
	I0816 12:26:29.062958 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 12:26:29.373380 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:29.379712 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:29.433197 1387479 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-606349" context rescaled to 1 replicas
	I0816 12:26:29.649411 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.667401759s)
	I0816 12:26:29.649489 1387479 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-606349"
	I0816 12:26:29.652875 1387479 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 12:26:29.657486 1387479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 12:26:29.670040 1387479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 12:26:29.670069 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:29.832774 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:29.833833 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:30.163903 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:30.333831 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:30.336447 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:30.662624 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:30.833251 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:30.834589 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:30.888060 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:31.164926 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:31.334601 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:31.335807 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:31.662527 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:31.843978 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:31.844729 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:32.173875 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:32.302524 1387479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.239476235s)
	I0816 12:26:32.333700 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:32.334233 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:32.662637 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:32.768571 1387479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 12:26:32.768657 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:32.784631 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:32.837343 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:32.838172 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:32.925508 1387479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 12:26:32.945156 1387479 addons.go:234] Setting addon gcp-auth=true in "addons-606349"
	I0816 12:26:32.945252 1387479 host.go:66] Checking if "addons-606349" exists ...
	I0816 12:26:32.945771 1387479 cli_runner.go:164] Run: docker container inspect addons-606349 --format={{.State.Status}}
	I0816 12:26:32.961850 1387479 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 12:26:32.961910 1387479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606349
	I0816 12:26:32.978448 1387479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34595 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/addons-606349/id_rsa Username:docker}
	I0816 12:26:33.103965 1387479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0816 12:26:33.105911 1387479 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 12:26:33.107630 1387479 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 12:26:33.107648 1387479 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 12:26:33.127810 1387479 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 12:26:33.127838 1387479 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 12:26:33.149337 1387479 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:26:33.149359 1387479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 12:26:33.174616 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:33.183957 1387479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 12:26:33.335976 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:33.337158 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:33.391524 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:33.663681 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:33.782154 1387479 addons.go:475] Verifying addon gcp-auth=true in "addons-606349"
	I0816 12:26:33.784300 1387479 out.go:177] * Verifying gcp-auth addon...
	I0816 12:26:33.787215 1387479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 12:26:33.796312 1387479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 12:26:33.796381 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:33.831396 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:33.831717 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:34.161973 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:34.290862 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:34.330172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:34.331474 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:34.662775 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:34.792116 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:34.833200 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:34.834097 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:35.161064 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:35.297022 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:35.330754 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:35.332369 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:35.662268 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:35.790807 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:35.831089 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:35.831532 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:35.885805 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:36.161565 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:36.291853 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:36.329974 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:36.331595 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:36.661189 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:36.790263 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:36.830836 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:36.831681 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:37.162266 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:37.290613 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:37.330612 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:37.331381 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:37.661652 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:37.791376 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:37.830883 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:37.831729 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:37.885997 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:38.161257 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:38.290912 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:38.330175 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:38.331144 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:38.660938 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:38.790984 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:38.830082 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:38.831486 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:39.161525 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:39.290615 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:39.330752 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:39.331620 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:39.660891 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:39.790951 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:39.830289 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:39.831681 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:40.161733 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:40.291146 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:40.331370 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:40.331587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:40.387022 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:40.661505 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:40.791014 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:40.830403 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:40.832250 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:41.161354 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:41.290675 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:41.331021 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:41.332029 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:41.660857 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:41.791222 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:41.830272 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:41.832936 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:42.162129 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:42.291598 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:42.331769 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:42.332160 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:42.662238 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:42.791199 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:42.830977 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:42.831739 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:42.885952 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:43.161207 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:43.291070 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:43.329910 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:43.330956 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:43.661570 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:43.790654 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:43.831153 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:43.831984 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:44.162107 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:44.291090 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:44.331056 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:44.331884 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:44.661825 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:44.791355 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:44.831463 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:44.831876 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:44.886675 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:45.167473 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:45.291642 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:45.330849 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:45.333274 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:45.661130 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:45.790725 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:45.831280 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:45.832041 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:46.160998 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:46.291132 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:46.331402 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:46.332231 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:46.660799 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:46.791575 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:46.834459 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:46.835600 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:47.161661 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:47.290920 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:47.329630 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:47.331455 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:47.386729 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:47.660709 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:47.791454 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:47.830188 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:47.831909 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:48.161429 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:48.290796 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:48.329809 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:48.332068 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:48.662769 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:48.790591 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:48.830779 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:48.831547 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:49.160899 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:49.291301 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:49.330641 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:49.332345 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:49.660642 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:49.790816 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:49.830063 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:49.831119 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:49.886414 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:50.161580 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:50.291013 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:50.331238 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:50.332175 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:50.661217 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:50.790874 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:50.829964 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:50.831754 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:51.161870 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:51.290680 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:51.330431 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:51.331532 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:51.661356 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:51.790491 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:51.829707 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:51.830964 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:52.161617 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:52.291120 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:52.329843 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:52.331617 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:52.385657 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:52.661217 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:52.790918 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:52.829715 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:52.831464 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:53.162564 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:53.290957 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:53.330408 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:53.342723 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:53.662030 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:53.791334 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:53.832182 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:53.834268 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:54.161945 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:54.291790 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:54.332338 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:54.332714 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:54.386089 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:54.661870 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:54.791236 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:54.831350 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:54.831813 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:55.161232 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:55.291047 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:55.331908 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:55.332209 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:55.661015 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:55.790632 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:55.830865 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:55.831734 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:56.161548 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:56.290776 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:56.330585 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:56.331708 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:56.386640 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:56.662363 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:56.792288 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:56.830618 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:56.832299 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:57.162459 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:57.291226 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:57.331424 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:57.331912 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:57.661495 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:57.790529 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:57.830202 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:57.831510 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:58.161454 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:58.291470 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:58.331172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:58.331665 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:58.662150 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:58.790295 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:58.831899 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:58.832766 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:58.886424 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:26:59.161257 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:59.290731 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:59.330597 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:59.332058 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:26:59.661834 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:26:59.791622 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:26:59.831186 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:26:59.831941 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:00.166695 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:00.297315 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:00.362271 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:00.362597 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:00.661340 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:00.790451 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:00.830404 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:00.831636 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:01.161087 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:01.290562 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:01.330792 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:01.331478 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:01.386104 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:01.661508 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:01.791023 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:01.830413 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:01.831850 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:02.161691 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:02.291407 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:02.331212 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:02.332442 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:02.661364 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:02.791166 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:02.830820 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:02.831800 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:03.161826 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:03.290671 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:03.331735 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:03.332194 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:03.386471 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:03.661681 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:03.791375 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:03.830671 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:03.832438 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:04.161130 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:04.290769 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:04.329944 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:04.333424 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:04.662148 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:04.790628 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:04.831305 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:04.832088 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:05.161361 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:05.290652 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:05.329797 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:05.331197 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:05.387530 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:05.660931 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:05.791236 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:05.831085 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:05.831582 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:06.161463 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:06.290328 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:06.330945 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:06.332321 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:06.661982 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:06.791339 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:06.830705 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:06.831691 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:07.161182 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:07.290682 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:07.331007 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:07.331874 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:07.661192 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:07.790850 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:07.830832 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:07.831748 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:07.885981 1387479 node_ready.go:53] node "addons-606349" has status "Ready":"False"
	I0816 12:27:08.160979 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:08.291884 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:08.331100 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:08.331911 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:08.674533 1387479 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 12:27:08.674561 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:08.792883 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:08.908809 1387479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 12:27:08.908835 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:08.923083 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:08.932328 1387479 node_ready.go:49] node "addons-606349" has status "Ready":"True"
	I0816 12:27:08.932356 1387479 node_ready.go:38] duration metric: took 40.049634146s for node "addons-606349" to be "Ready" ...
	I0816 12:27:08.932368 1387479 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:27:08.972961 1387479 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8ctjp" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:09.171999 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:09.302777 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:09.402285 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:09.404814 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:09.662675 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:09.791064 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:09.834304 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:09.835333 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:10.163620 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:10.290748 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:10.334200 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:10.375978 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:10.484834 1387479 pod_ready.go:93] pod "coredns-6f6b679f8f-8ctjp" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.484870 1387479 pod_ready.go:82] duration metric: took 1.511872407s for pod "coredns-6f6b679f8f-8ctjp" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.484921 1387479 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.493243 1387479 pod_ready.go:93] pod "etcd-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.493266 1387479 pod_ready.go:82] duration metric: took 8.33299ms for pod "etcd-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.493307 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.505161 1387479 pod_ready.go:93] pod "kube-apiserver-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.505201 1387479 pod_ready.go:82] duration metric: took 11.879941ms for pod "kube-apiserver-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.505233 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.513574 1387479 pod_ready.go:93] pod "kube-controller-manager-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.513609 1387479 pod_ready.go:82] duration metric: took 8.361494ms for pod "kube-controller-manager-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.513624 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjdhm" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.519990 1387479 pod_ready.go:93] pod "kube-proxy-vjdhm" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.520017 1387479 pod_ready.go:82] duration metric: took 6.385977ms for pod "kube-proxy-vjdhm" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.520029 1387479 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.662464 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:10.791966 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:10.830527 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:10.832645 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:10.886808 1387479 pod_ready.go:93] pod "kube-scheduler-addons-606349" in "kube-system" namespace has status "Ready":"True"
	I0816 12:27:10.886833 1387479 pod_ready.go:82] duration metric: took 366.796151ms for pod "kube-scheduler-addons-606349" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:10.886846 1387479 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace to be "Ready" ...
	I0816 12:27:11.163057 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:11.291877 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:11.342978 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:11.346028 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:11.676747 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:11.792035 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:11.834844 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:11.836150 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:12.170270 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:12.291905 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:12.333098 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:12.337145 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:12.663705 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:12.791617 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:12.835498 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:12.849462 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:12.894541 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:13.162949 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:13.290878 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:13.330411 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:13.333567 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:13.663875 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:13.791474 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:13.831482 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:13.834626 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:14.163689 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:14.291366 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:14.331546 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:14.333297 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:14.663551 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:14.791114 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:14.831914 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:14.832494 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:15.162346 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:15.291222 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:15.331065 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:15.332665 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:15.399927 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:15.665574 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:15.792151 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:15.833875 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:15.835122 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:16.162587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:16.293370 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:16.332857 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:16.335730 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:16.665314 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:16.791017 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:16.831353 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:16.840973 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:17.163102 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:17.291276 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:17.331310 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:17.332384 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:17.662873 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:17.790830 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:17.831771 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:17.832820 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:17.893626 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:18.163118 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:18.291894 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:18.354987 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:18.364856 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:18.663333 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:18.792309 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:18.832721 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:18.833350 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:19.164355 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:19.291307 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:19.331760 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:19.332489 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:19.662873 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:19.793474 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:19.894133 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:19.895208 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:19.895674 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:20.163079 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:20.291463 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:20.331525 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:20.335731 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:20.664027 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:20.791629 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:20.838935 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:20.843745 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:21.162804 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:21.291540 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:21.334327 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:21.335686 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:21.663438 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:21.790861 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:21.831004 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:21.833789 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:22.163110 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:22.290917 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:22.333064 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:22.334158 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:22.393345 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:22.663523 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:22.791662 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:22.834882 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:22.835334 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:23.163558 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:23.291706 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:23.333678 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:23.334264 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:23.663374 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:23.791360 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:23.833703 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:23.835081 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:24.162963 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:24.292142 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:24.396168 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:24.397063 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:24.400095 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:24.664172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:24.791502 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:24.834325 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:24.835358 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:25.164710 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:25.291386 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:25.333890 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:25.335431 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:25.664244 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:25.792580 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:25.840222 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:25.842687 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:26.162828 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:26.293165 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:26.332312 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:26.333139 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:26.663371 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:26.790825 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:26.830778 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:26.832824 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:26.893107 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:27.163667 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:27.290761 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:27.330738 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:27.332555 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:27.664925 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:27.799084 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:27.908163 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:27.909526 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:28.163916 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:28.294112 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:28.342370 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:28.343491 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:28.676350 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:28.791161 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:28.834335 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:28.835670 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:28.902150 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:29.163338 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:29.295824 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:29.335203 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:29.337135 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:29.664836 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:29.790706 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:29.834162 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:29.835845 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:30.167147 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:30.291758 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:30.394345 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:30.394838 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:30.662828 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:30.793735 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:30.895209 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:30.896134 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:31.163214 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:31.291638 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:31.331763 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:31.332807 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:31.393223 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:31.662393 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:31.791512 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:31.831714 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:31.833045 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:32.163040 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:32.291789 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:32.333863 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:32.336619 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:32.663154 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:32.792352 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:32.834620 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:32.837092 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:33.168491 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:33.291962 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:33.332708 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:33.332835 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:33.395207 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:33.664843 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:33.792171 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:33.832339 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:33.833159 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:34.162607 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:34.290749 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:34.332811 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:34.338106 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:34.664099 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:34.791495 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:34.846028 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:34.847794 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:35.165250 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:35.293074 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:35.333082 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:35.335768 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:35.396791 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:35.664545 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:35.791316 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:35.840041 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:35.843905 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:36.162703 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:36.291050 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:36.332168 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:36.333491 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:36.663627 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:36.795874 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:36.896019 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:36.897267 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:37.168152 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:37.291972 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:37.333586 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:37.336521 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:37.398944 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:37.663435 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:37.793289 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:37.831618 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:37.832532 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:38.165215 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:38.291497 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:38.335820 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:38.337592 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:38.667587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:38.791094 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:38.832809 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:38.833349 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:39.166099 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:39.290975 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:39.331841 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:39.333828 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:39.662383 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:39.792455 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:39.831377 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:39.833009 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:39.893221 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:40.162550 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:40.290806 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:40.334605 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:40.337582 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:40.665324 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:40.791664 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:40.830794 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:40.834925 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:41.162839 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:41.291321 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:41.331463 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:41.332749 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:41.666403 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:41.792132 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:41.894208 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:41.894857 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:41.895675 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:42.163412 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:42.294946 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:42.333446 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:42.334456 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:42.662156 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:42.791349 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:42.833376 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:42.839777 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:43.164191 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:43.291621 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:43.333104 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:43.334214 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:43.665554 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:43.791377 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:43.832759 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:43.833623 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:43.918869 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:44.162762 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:44.291468 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:44.332917 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:44.333475 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:44.662676 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:44.791384 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:44.831860 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:44.832505 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:45.180045 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:45.291030 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:45.335475 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:45.339232 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:45.662593 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:45.791406 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:45.833587 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:45.834598 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:46.164512 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:46.291572 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:46.334750 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:46.335578 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:46.394609 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:46.664333 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:46.792287 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:46.834624 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:46.839353 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:47.166110 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:47.290618 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:47.330646 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:47.340649 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:47.664844 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:47.791607 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:47.831705 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:47.835234 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:48.165004 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:48.299325 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:48.333187 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:48.335150 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:48.663613 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:48.791706 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:48.837255 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:48.838339 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:48.896101 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:49.163437 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:49.292003 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:49.330577 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:49.340604 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:49.663036 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:49.791578 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:49.833962 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:49.835134 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:50.164308 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:50.291611 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:50.332851 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:50.334751 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:50.671899 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:50.792421 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:50.832530 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:50.835208 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:51.163688 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:51.292044 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:51.350401 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:51.353209 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:51.395395 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:51.665568 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:51.791922 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:51.838807 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:51.841412 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:52.163610 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:52.290874 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:52.332349 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:52.332923 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:52.663642 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:52.791877 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:52.831708 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:52.832341 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:53.163156 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:53.290470 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:53.330196 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:53.332717 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:53.663260 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:53.791242 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:53.830740 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:53.832471 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:53.893871 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:54.163024 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:54.291082 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:54.331441 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:54.333265 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:54.665739 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:54.792131 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:54.832160 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:54.833023 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:55.163260 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:55.291562 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:55.330036 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:55.333169 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:55.663100 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:55.793896 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:55.895014 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:55.895404 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:55.898473 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:56.163177 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:56.291558 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:56.345290 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:56.351275 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:56.663753 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:56.795794 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:56.830950 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:56.832855 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:57.181715 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:57.291972 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:57.330623 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:57.335219 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:57.664871 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:57.791931 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:57.836942 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 12:27:57.838330 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:57.897578 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:27:58.165068 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:58.291460 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:58.329952 1387479 kapi.go:107] duration metric: took 1m29.503417052s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 12:27:58.332164 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:58.662066 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:58.790557 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:58.833275 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:59.162850 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:59.292142 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:59.334790 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:27:59.662947 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:27:59.791681 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:27:59.834662 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:00.224471 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:00.313741 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:00.355616 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:00.400909 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:00.662401 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:00.791665 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:00.832636 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:01.163356 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:01.291028 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:01.333582 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:01.674205 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:01.792172 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:01.836367 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:02.173076 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:02.292569 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:02.332588 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:02.662632 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:02.792513 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 12:28:02.839000 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:02.901650 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:03.163275 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:03.298696 1387479 kapi.go:107] duration metric: took 1m29.511469681s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 12:28:03.300917 1387479 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-606349 cluster.
	I0816 12:28:03.302851 1387479 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 12:28:03.305103 1387479 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 12:28:03.396773 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:03.662893 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:03.832334 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:04.162546 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:04.331735 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:04.662656 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:04.832684 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:05.163603 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:05.333919 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:05.392917 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:05.663185 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:05.833486 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:06.163316 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:06.332170 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:06.663710 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:06.833005 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:07.163146 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:07.332753 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:07.663365 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:07.832697 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:07.893815 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:08.163243 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:08.332961 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:08.663201 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:08.832067 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:09.162683 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:09.332110 1387479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 12:28:09.662411 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:09.833544 1387479 kapi.go:107] duration metric: took 1m41.006012797s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 12:28:09.896115 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:10.163072 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:10.663877 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:11.168649 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:11.662800 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:12.163143 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:12.397889 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:12.663865 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:13.162656 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:13.664106 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:14.163514 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:14.662038 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:14.892831 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:15.162952 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:15.670824 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:16.163004 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:16.662693 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:16.893698 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:17.163504 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:17.663005 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:18.165035 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:18.662737 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:19.163715 1387479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 12:28:19.393694 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:19.664049 1387479 kapi.go:107] duration metric: took 1m50.006557983s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 12:28:19.666033 1387479 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0816 12:28:19.668150 1387479 addons.go:510] duration metric: took 1m57.850743784s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0816 12:28:21.393804 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:23.893090 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:25.894940 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:28.393408 1387479 pod_ready.go:103] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"False"
	I0816 12:28:29.393687 1387479 pod_ready.go:93] pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace has status "Ready":"True"
	I0816 12:28:29.393711 1387479 pod_ready.go:82] duration metric: took 1m18.506857719s for pod "metrics-server-8988944d9-lfhc7" in "kube-system" namespace to be "Ready" ...
	I0816 12:28:29.393724 1387479 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tlscx" in "kube-system" namespace to be "Ready" ...
	I0816 12:28:29.399209 1387479 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tlscx" in "kube-system" namespace has status "Ready":"True"
	I0816 12:28:29.399233 1387479 pod_ready.go:82] duration metric: took 5.500175ms for pod "nvidia-device-plugin-daemonset-tlscx" in "kube-system" namespace to be "Ready" ...
	I0816 12:28:29.399257 1387479 pod_ready.go:39] duration metric: took 1m20.46687626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 12:28:29.399276 1387479 api_server.go:52] waiting for apiserver process to appear ...
	I0816 12:28:29.399308 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 12:28:29.399375 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 12:28:29.457422 1387479 cri.go:89] found id: "8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:29.457446 1387479 cri.go:89] found id: ""
	I0816 12:28:29.457453 1387479 logs.go:276] 1 containers: [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80]
	I0816 12:28:29.457511 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.461058 1387479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 12:28:29.461136 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 12:28:29.502263 1387479 cri.go:89] found id: "20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:29.502327 1387479 cri.go:89] found id: ""
	I0816 12:28:29.502341 1387479 logs.go:276] 1 containers: [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928]
	I0816 12:28:29.502395 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.505822 1387479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 12:28:29.505894 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 12:28:29.545081 1387479 cri.go:89] found id: "bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:29.545106 1387479 cri.go:89] found id: ""
	I0816 12:28:29.545115 1387479 logs.go:276] 1 containers: [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8]
	I0816 12:28:29.545171 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.549095 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 12:28:29.549177 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 12:28:29.597604 1387479 cri.go:89] found id: "5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:29.597637 1387479 cri.go:89] found id: ""
	I0816 12:28:29.597645 1387479 logs.go:276] 1 containers: [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a]
	I0816 12:28:29.597711 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.601306 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 12:28:29.601396 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 12:28:29.642050 1387479 cri.go:89] found id: "c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:29.642073 1387479 cri.go:89] found id: ""
	I0816 12:28:29.642082 1387479 logs.go:276] 1 containers: [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960]
	I0816 12:28:29.642136 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.645654 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 12:28:29.645737 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 12:28:29.684767 1387479 cri.go:89] found id: "5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:29.684835 1387479 cri.go:89] found id: ""
	I0816 12:28:29.684856 1387479 logs.go:276] 1 containers: [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690]
	I0816 12:28:29.684947 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.688303 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 12:28:29.688437 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 12:28:29.735108 1387479 cri.go:89] found id: "e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:29.735232 1387479 cri.go:89] found id: ""
	I0816 12:28:29.735255 1387479 logs.go:276] 1 containers: [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4]
	I0816 12:28:29.735378 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:29.739400 1387479 logs.go:123] Gathering logs for kubelet ...
	I0816 12:28:29.739425 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 12:28:29.844499 1387479 logs.go:123] Gathering logs for dmesg ...
	I0816 12:28:29.844544 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 12:28:29.863666 1387479 logs.go:123] Gathering logs for kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] ...
	I0816 12:28:29.863698 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:29.920864 1387479 logs.go:123] Gathering logs for coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] ...
	I0816 12:28:29.920899 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:29.966050 1387479 logs.go:123] Gathering logs for kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] ...
	I0816 12:28:29.966082 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:30.062407 1387479 logs.go:123] Gathering logs for CRI-O ...
	I0816 12:28:30.062449 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 12:28:30.162425 1387479 logs.go:123] Gathering logs for describe nodes ...
	I0816 12:28:30.162471 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 12:28:30.373504 1387479 logs.go:123] Gathering logs for etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] ...
	I0816 12:28:30.373554 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:30.427062 1387479 logs.go:123] Gathering logs for kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] ...
	I0816 12:28:30.427101 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:30.467133 1387479 logs.go:123] Gathering logs for kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] ...
	I0816 12:28:30.467162 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:30.540798 1387479 logs.go:123] Gathering logs for kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] ...
	I0816 12:28:30.540836 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:30.588084 1387479 logs.go:123] Gathering logs for container status ...
	I0816 12:28:30.588116 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 12:28:33.152208 1387479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:28:33.166410 1387479 api_server.go:72] duration metric: took 2m11.349407348s to wait for apiserver process to appear ...
	I0816 12:28:33.166437 1387479 api_server.go:88] waiting for apiserver healthz status ...
	I0816 12:28:33.166474 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 12:28:33.166533 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 12:28:33.208796 1387479 cri.go:89] found id: "8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:33.208816 1387479 cri.go:89] found id: ""
	I0816 12:28:33.208825 1387479 logs.go:276] 1 containers: [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80]
	I0816 12:28:33.208884 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.212497 1387479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 12:28:33.212614 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 12:28:33.253888 1387479 cri.go:89] found id: "20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:33.253960 1387479 cri.go:89] found id: ""
	I0816 12:28:33.253981 1387479 logs.go:276] 1 containers: [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928]
	I0816 12:28:33.254071 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.258250 1387479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 12:28:33.258328 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 12:28:33.296218 1387479 cri.go:89] found id: "bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:33.296241 1387479 cri.go:89] found id: ""
	I0816 12:28:33.296250 1387479 logs.go:276] 1 containers: [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8]
	I0816 12:28:33.296307 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.299837 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 12:28:33.299911 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 12:28:33.340233 1387479 cri.go:89] found id: "5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:33.340256 1387479 cri.go:89] found id: ""
	I0816 12:28:33.340265 1387479 logs.go:276] 1 containers: [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a]
	I0816 12:28:33.340321 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.343860 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 12:28:33.343928 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 12:28:33.387651 1387479 cri.go:89] found id: "c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:33.387674 1387479 cri.go:89] found id: ""
	I0816 12:28:33.387682 1387479 logs.go:276] 1 containers: [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960]
	I0816 12:28:33.387742 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.391358 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 12:28:33.391431 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 12:28:33.429884 1387479 cri.go:89] found id: "5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:33.429910 1387479 cri.go:89] found id: ""
	I0816 12:28:33.429919 1387479 logs.go:276] 1 containers: [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690]
	I0816 12:28:33.429974 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.433533 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 12:28:33.433637 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 12:28:33.478064 1387479 cri.go:89] found id: "e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:33.478087 1387479 cri.go:89] found id: ""
	I0816 12:28:33.478095 1387479 logs.go:276] 1 containers: [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4]
	I0816 12:28:33.478149 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:33.481734 1387479 logs.go:123] Gathering logs for kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] ...
	I0816 12:28:33.481824 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:33.555958 1387479 logs.go:123] Gathering logs for CRI-O ...
	I0816 12:28:33.555994 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 12:28:33.659868 1387479 logs.go:123] Gathering logs for container status ...
	I0816 12:28:33.659946 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 12:28:33.727812 1387479 logs.go:123] Gathering logs for kubelet ...
	I0816 12:28:33.727843 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 12:28:33.844973 1387479 logs.go:123] Gathering logs for dmesg ...
	I0816 12:28:33.845011 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 12:28:33.864027 1387479 logs.go:123] Gathering logs for kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] ...
	I0816 12:28:33.864065 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:33.923895 1387479 logs.go:123] Gathering logs for etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] ...
	I0816 12:28:33.923928 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:33.978211 1387479 logs.go:123] Gathering logs for kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] ...
	I0816 12:28:33.978246 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:34.075227 1387479 logs.go:123] Gathering logs for describe nodes ...
	I0816 12:28:34.075266 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 12:28:34.222992 1387479 logs.go:123] Gathering logs for coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] ...
	I0816 12:28:34.223024 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:34.264047 1387479 logs.go:123] Gathering logs for kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] ...
	I0816 12:28:34.264077 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:34.312494 1387479 logs.go:123] Gathering logs for kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] ...
	I0816 12:28:34.312526 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:36.853727 1387479 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 12:28:36.862140 1387479 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 12:28:36.863277 1387479 api_server.go:141] control plane version: v1.31.0
	I0816 12:28:36.863308 1387479 api_server.go:131] duration metric: took 3.696864236s to wait for apiserver health ...
	I0816 12:28:36.863318 1387479 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 12:28:36.863339 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 12:28:36.863406 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 12:28:36.902968 1387479 cri.go:89] found id: "8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:36.902991 1387479 cri.go:89] found id: ""
	I0816 12:28:36.902998 1387479 logs.go:276] 1 containers: [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80]
	I0816 12:28:36.903087 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:36.906655 1387479 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 12:28:36.906731 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 12:28:36.945628 1387479 cri.go:89] found id: "20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:36.945694 1387479 cri.go:89] found id: ""
	I0816 12:28:36.945716 1387479 logs.go:276] 1 containers: [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928]
	I0816 12:28:36.945829 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:36.949385 1387479 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 12:28:36.949469 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 12:28:36.991004 1387479 cri.go:89] found id: "bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:36.991029 1387479 cri.go:89] found id: ""
	I0816 12:28:36.991036 1387479 logs.go:276] 1 containers: [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8]
	I0816 12:28:36.991092 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:36.994758 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 12:28:36.994894 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 12:28:37.052708 1387479 cri.go:89] found id: "5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:37.053822 1387479 cri.go:89] found id: ""
	I0816 12:28:37.053860 1387479 logs.go:276] 1 containers: [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a]
	I0816 12:28:37.053930 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.059581 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 12:28:37.059707 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 12:28:37.101933 1387479 cri.go:89] found id: "c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:37.101957 1387479 cri.go:89] found id: ""
	I0816 12:28:37.101965 1387479 logs.go:276] 1 containers: [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960]
	I0816 12:28:37.102022 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.105575 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 12:28:37.105648 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 12:28:37.151389 1387479 cri.go:89] found id: "5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:37.151414 1387479 cri.go:89] found id: ""
	I0816 12:28:37.151423 1387479 logs.go:276] 1 containers: [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690]
	I0816 12:28:37.151510 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.155322 1387479 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 12:28:37.155423 1387479 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 12:28:37.196293 1387479 cri.go:89] found id: "e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:37.196326 1387479 cri.go:89] found id: ""
	I0816 12:28:37.196335 1387479 logs.go:276] 1 containers: [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4]
	I0816 12:28:37.196409 1387479 ssh_runner.go:195] Run: which crictl
	I0816 12:28:37.200119 1387479 logs.go:123] Gathering logs for dmesg ...
	I0816 12:28:37.200195 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 12:28:37.217260 1387479 logs.go:123] Gathering logs for kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] ...
	I0816 12:28:37.217336 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80"
	I0816 12:28:37.289119 1387479 logs.go:123] Gathering logs for etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] ...
	I0816 12:28:37.289162 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928"
	I0816 12:28:37.342084 1387479 logs.go:123] Gathering logs for kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] ...
	I0816 12:28:37.342121 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a"
	I0816 12:28:37.394454 1387479 logs.go:123] Gathering logs for kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] ...
	I0816 12:28:37.394493 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690"
	I0816 12:28:37.461372 1387479 logs.go:123] Gathering logs for container status ...
	I0816 12:28:37.461412 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 12:28:37.528629 1387479 logs.go:123] Gathering logs for kubelet ...
	I0816 12:28:37.528661 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 12:28:37.637062 1387479 logs.go:123] Gathering logs for describe nodes ...
	I0816 12:28:37.637102 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 12:28:37.775552 1387479 logs.go:123] Gathering logs for coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] ...
	I0816 12:28:37.775584 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8"
	I0816 12:28:37.824816 1387479 logs.go:123] Gathering logs for kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] ...
	I0816 12:28:37.824849 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960"
	I0816 12:28:37.865321 1387479 logs.go:123] Gathering logs for kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] ...
	I0816 12:28:37.865351 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4"
	I0816 12:28:37.922440 1387479 logs.go:123] Gathering logs for CRI-O ...
	I0816 12:28:37.922476 1387479 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 12:28:40.538913 1387479 system_pods.go:59] 18 kube-system pods found
	I0816 12:28:40.538957 1387479 system_pods.go:61] "coredns-6f6b679f8f-8ctjp" [1dd36daf-8683-4242-8ac3-9a037d03b77d] Running
	I0816 12:28:40.538965 1387479 system_pods.go:61] "csi-hostpath-attacher-0" [faaebc96-a57a-4ba1-9b1b-9af9eda2bfaa] Running
	I0816 12:28:40.538970 1387479 system_pods.go:61] "csi-hostpath-resizer-0" [5c750c14-1267-4831-b07d-f1340d77d353] Running
	I0816 12:28:40.538975 1387479 system_pods.go:61] "csi-hostpathplugin-82nxb" [c0368736-0e64-416c-8421-8681c40ed712] Running
	I0816 12:28:40.538979 1387479 system_pods.go:61] "etcd-addons-606349" [e11563de-8441-4a42-9c49-ee724454e4d3] Running
	I0816 12:28:40.538983 1387479 system_pods.go:61] "kindnet-5jgmz" [3f101520-e1b8-4170-8ca5-94d6a290443e] Running
	I0816 12:28:40.538988 1387479 system_pods.go:61] "kube-apiserver-addons-606349" [176e3fad-50a6-4223-b90c-3ef3e52c7289] Running
	I0816 12:28:40.538992 1387479 system_pods.go:61] "kube-controller-manager-addons-606349" [607563de-f7a6-4d48-b359-2a6bd36a1252] Running
	I0816 12:28:40.538998 1387479 system_pods.go:61] "kube-ingress-dns-minikube" [ff0ffcea-ad8a-44e3-a010-29d571f3bd06] Running
	I0816 12:28:40.539002 1387479 system_pods.go:61] "kube-proxy-vjdhm" [f62a6b13-cf4c-49e6-b710-dcc4bdb8d830] Running
	I0816 12:28:40.539006 1387479 system_pods.go:61] "kube-scheduler-addons-606349" [c0c34f3e-eee1-4bd3-bac1-6d70f95c1cdd] Running
	I0816 12:28:40.539013 1387479 system_pods.go:61] "metrics-server-8988944d9-lfhc7" [93c15fce-49db-484e-817d-4f2f088bd4e5] Running
	I0816 12:28:40.539017 1387479 system_pods.go:61] "nvidia-device-plugin-daemonset-tlscx" [50afed3c-442a-4c9e-b404-875b12dd96e9] Running
	I0816 12:28:40.539021 1387479 system_pods.go:61] "registry-6fb4cdfc84-pbm8s" [73faa728-22c2-4a32-a43d-85763f935998] Running
	I0816 12:28:40.539026 1387479 system_pods.go:61] "registry-proxy-xqwvx" [a9e788b9-88d0-492b-8001-c0da62bb7adc] Running
	I0816 12:28:40.539038 1387479 system_pods.go:61] "snapshot-controller-56fcc65765-mjvvx" [ce222d15-6641-4c9b-b583-6c9c45a34880] Running
	I0816 12:28:40.539042 1387479 system_pods.go:61] "snapshot-controller-56fcc65765-q8vp5" [83bb85d3-0be7-46b5-86a9-aa9f949b555f] Running
	I0816 12:28:40.539046 1387479 system_pods.go:61] "storage-provisioner" [42e6183e-b46d-4e8d-8c94-b53653e34dca] Running
	I0816 12:28:40.539057 1387479 system_pods.go:74] duration metric: took 3.675731692s to wait for pod list to return data ...
	I0816 12:28:40.539069 1387479 default_sa.go:34] waiting for default service account to be created ...
	I0816 12:28:40.541919 1387479 default_sa.go:45] found service account: "default"
	I0816 12:28:40.541954 1387479 default_sa.go:55] duration metric: took 2.875563ms for default service account to be created ...
	I0816 12:28:40.541965 1387479 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 12:28:40.552339 1387479 system_pods.go:86] 18 kube-system pods found
	I0816 12:28:40.552388 1387479 system_pods.go:89] "coredns-6f6b679f8f-8ctjp" [1dd36daf-8683-4242-8ac3-9a037d03b77d] Running
	I0816 12:28:40.552398 1387479 system_pods.go:89] "csi-hostpath-attacher-0" [faaebc96-a57a-4ba1-9b1b-9af9eda2bfaa] Running
	I0816 12:28:40.552403 1387479 system_pods.go:89] "csi-hostpath-resizer-0" [5c750c14-1267-4831-b07d-f1340d77d353] Running
	I0816 12:28:40.552407 1387479 system_pods.go:89] "csi-hostpathplugin-82nxb" [c0368736-0e64-416c-8421-8681c40ed712] Running
	I0816 12:28:40.552413 1387479 system_pods.go:89] "etcd-addons-606349" [e11563de-8441-4a42-9c49-ee724454e4d3] Running
	I0816 12:28:40.552417 1387479 system_pods.go:89] "kindnet-5jgmz" [3f101520-e1b8-4170-8ca5-94d6a290443e] Running
	I0816 12:28:40.552422 1387479 system_pods.go:89] "kube-apiserver-addons-606349" [176e3fad-50a6-4223-b90c-3ef3e52c7289] Running
	I0816 12:28:40.552427 1387479 system_pods.go:89] "kube-controller-manager-addons-606349" [607563de-f7a6-4d48-b359-2a6bd36a1252] Running
	I0816 12:28:40.552431 1387479 system_pods.go:89] "kube-ingress-dns-minikube" [ff0ffcea-ad8a-44e3-a010-29d571f3bd06] Running
	I0816 12:28:40.552436 1387479 system_pods.go:89] "kube-proxy-vjdhm" [f62a6b13-cf4c-49e6-b710-dcc4bdb8d830] Running
	I0816 12:28:40.552440 1387479 system_pods.go:89] "kube-scheduler-addons-606349" [c0c34f3e-eee1-4bd3-bac1-6d70f95c1cdd] Running
	I0816 12:28:40.552446 1387479 system_pods.go:89] "metrics-server-8988944d9-lfhc7" [93c15fce-49db-484e-817d-4f2f088bd4e5] Running
	I0816 12:28:40.552451 1387479 system_pods.go:89] "nvidia-device-plugin-daemonset-tlscx" [50afed3c-442a-4c9e-b404-875b12dd96e9] Running
	I0816 12:28:40.552455 1387479 system_pods.go:89] "registry-6fb4cdfc84-pbm8s" [73faa728-22c2-4a32-a43d-85763f935998] Running
	I0816 12:28:40.552461 1387479 system_pods.go:89] "registry-proxy-xqwvx" [a9e788b9-88d0-492b-8001-c0da62bb7adc] Running
	I0816 12:28:40.552465 1387479 system_pods.go:89] "snapshot-controller-56fcc65765-mjvvx" [ce222d15-6641-4c9b-b583-6c9c45a34880] Running
	I0816 12:28:40.552469 1387479 system_pods.go:89] "snapshot-controller-56fcc65765-q8vp5" [83bb85d3-0be7-46b5-86a9-aa9f949b555f] Running
	I0816 12:28:40.552473 1387479 system_pods.go:89] "storage-provisioner" [42e6183e-b46d-4e8d-8c94-b53653e34dca] Running
	I0816 12:28:40.552484 1387479 system_pods.go:126] duration metric: took 10.512296ms to wait for k8s-apps to be running ...
	I0816 12:28:40.552492 1387479 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 12:28:40.552557 1387479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:28:40.565507 1387479 system_svc.go:56] duration metric: took 13.005118ms WaitForService to wait for kubelet
	I0816 12:28:40.565560 1387479 kubeadm.go:582] duration metric: took 2m18.748562439s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 12:28:40.565583 1387479 node_conditions.go:102] verifying NodePressure condition ...
	I0816 12:28:40.569211 1387479 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0816 12:28:40.569246 1387479 node_conditions.go:123] node cpu capacity is 2
	I0816 12:28:40.569261 1387479 node_conditions.go:105] duration metric: took 3.670413ms to run NodePressure ...
	I0816 12:28:40.569273 1387479 start.go:241] waiting for startup goroutines ...
	I0816 12:28:40.569281 1387479 start.go:246] waiting for cluster config update ...
	I0816 12:28:40.569298 1387479 start.go:255] writing updated cluster config ...
	I0816 12:28:40.569618 1387479 ssh_runner.go:195] Run: rm -f paused
	I0816 12:28:40.917637 1387479 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 12:28:40.921597 1387479 out.go:177] * Done! kubectl is now configured to use "addons-606349" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 12:33:26 addons-606349 crio[958]: time="2024-08-16 12:33:26.432553568Z" level=info msg="Starting container: 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf" id=c8f42069-bd84-46a7-9742-e30b6614a303 name=/runtime.v1.RuntimeService/StartContainer
	Aug 16 12:33:26 addons-606349 crio[958]: time="2024-08-16 12:33:26.447381832Z" level=info msg="Started container" PID=8298 containerID=58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf description=headlamp/headlamp-57fb76fcdb-8dczv/headlamp id=c8f42069-bd84-46a7-9742-e30b6614a303 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.293300148Z" level=info msg="Stopping container: 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf (timeout: 30s)" id=fc8652cd-1d05-4e54-b1ff-4db43bd9203e name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:33:34 addons-606349 conmon[8286]: conmon 58e97fbe1770ecc2b0af <ninfo>: container 8298 exited with status 2
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.320829000Z" level=info msg="Stopped container 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf: headlamp/headlamp-57fb76fcdb-8dczv/headlamp" id=fc8652cd-1d05-4e54-b1ff-4db43bd9203e name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.321385532Z" level=info msg="Stopping pod sandbox: 0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb" id=0f78f53f-0e61-46a7-afe7-3a76812279a4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.321596377Z" level=info msg="Got pod network &{Name:headlamp-57fb76fcdb-8dczv Namespace:headlamp ID:0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb UID:5cf682b2-d9a3-465d-9945-29f18354cb72 NetNS:/var/run/netns/9d2f0de5-1af1-463c-8e8f-bab6b1d49077 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.321725787Z" level=info msg="Deleting pod headlamp_headlamp-57fb76fcdb-8dczv from CNI network \"kindnet\" (type=ptp)"
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.340314655Z" level=info msg="Stopped pod sandbox: 0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb" id=0f78f53f-0e61-46a7-afe7-3a76812279a4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.464840107Z" level=info msg="Removing container: 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf" id=40d7b094-bb0a-47d4-87fb-a2eb20c90e9a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 12:33:34 addons-606349 crio[958]: time="2024-08-16 12:33:34.479939335Z" level=info msg="Removed container 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf: headlamp/headlamp-57fb76fcdb-8dczv/headlamp" id=40d7b094-bb0a-47d4-87fb-a2eb20c90e9a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.280240812Z" level=info msg="Stopping pod sandbox: 2506f308b0ef3dc688bd4a2f33aa012499a04d244287dc297a806acbbd8577d3" id=a6f9bc31-f730-4224-8340-4f4f6617c6d3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.280293152Z" level=info msg="Stopped pod sandbox (already stopped): 2506f308b0ef3dc688bd4a2f33aa012499a04d244287dc297a806acbbd8577d3" id=a6f9bc31-f730-4224-8340-4f4f6617c6d3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.281234780Z" level=info msg="Removing pod sandbox: 2506f308b0ef3dc688bd4a2f33aa012499a04d244287dc297a806acbbd8577d3" id=84137cbb-b40f-4a6c-b962-8d5ec9df129d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.289334913Z" level=info msg="Removed pod sandbox: 2506f308b0ef3dc688bd4a2f33aa012499a04d244287dc297a806acbbd8577d3" id=84137cbb-b40f-4a6c-b962-8d5ec9df129d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.290057606Z" level=info msg="Stopping pod sandbox: 0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb" id=cbb13567-00e6-4acc-9847-b7af586c2460 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.290109404Z" level=info msg="Stopped pod sandbox (already stopped): 0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb" id=cbb13567-00e6-4acc-9847-b7af586c2460 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.290513306Z" level=info msg="Removing pod sandbox: 0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb" id=51590f8c-be81-4ac4-823d-9a9bb5e28b0e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 12:34:18 addons-606349 crio[958]: time="2024-08-16 12:34:18.304594953Z" level=info msg="Removed pod sandbox: 0b48c08182da8d67aea3d09e940fe9ec77f50bd83dc3bdc941f5b3d7988caefb" id=51590f8c-be81-4ac4-823d-9a9bb5e28b0e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 16 12:34:19 addons-606349 crio[958]: time="2024-08-16 12:34:19.366620093Z" level=info msg="Stopping container: e4c0ee4099d2509173ac20fb4fadeec7b279cdb2d9be4e2104cf38ffb8e1e253 (timeout: 30s)" id=784e0741-2eaf-417b-907c-adcbfc9c1cab name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:34:20 addons-606349 crio[958]: time="2024-08-16 12:34:20.547135889Z" level=info msg="Stopped container e4c0ee4099d2509173ac20fb4fadeec7b279cdb2d9be4e2104cf38ffb8e1e253: kube-system/metrics-server-8988944d9-lfhc7/metrics-server" id=784e0741-2eaf-417b-907c-adcbfc9c1cab name=/runtime.v1.RuntimeService/StopContainer
	Aug 16 12:34:20 addons-606349 crio[958]: time="2024-08-16 12:34:20.548021944Z" level=info msg="Stopping pod sandbox: cd02c70718582bd7392f36a5cf88ed2bff85c502693491ea8df4bd5efa5982ec" id=b5922d71-5978-41ab-b839-1d34f5dbf0a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 16 12:34:20 addons-606349 crio[958]: time="2024-08-16 12:34:20.548249232Z" level=info msg="Got pod network &{Name:metrics-server-8988944d9-lfhc7 Namespace:kube-system ID:cd02c70718582bd7392f36a5cf88ed2bff85c502693491ea8df4bd5efa5982ec UID:93c15fce-49db-484e-817d-4f2f088bd4e5 NetNS:/var/run/netns/fa391c8d-6a2d-4343-a618-fa6ea99b4e5c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 16 12:34:20 addons-606349 crio[958]: time="2024-08-16 12:34:20.548389621Z" level=info msg="Deleting pod kube-system_metrics-server-8988944d9-lfhc7 from CNI network \"kindnet\" (type=ptp)"
	Aug 16 12:34:20 addons-606349 crio[958]: time="2024-08-16 12:34:20.594556315Z" level=info msg="Stopped pod sandbox: cd02c70718582bd7392f36a5cf88ed2bff85c502693491ea8df4bd5efa5982ec" id=b5922d71-5978-41ab-b839-1d34f5dbf0a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	52b2678acd353       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   e4c59245ac3a2       hello-world-app-55bf9c44b4-ktmlr
	ffbdceed8df31       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         4 minutes ago        Running             nginx                     0                   e1db1efc0bbd6       nginx
	1745a4491518b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago        Running             busybox                   0                   413248ad425c2       busybox
	e4c0ee4099d25       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   6 minutes ago        Exited              metrics-server            0                   cd02c70718582       metrics-server-8988944d9-lfhc7
	a9f056a1a1096       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        7 minutes ago        Running             local-path-provisioner    0                   da35a05534ea1       local-path-provisioner-86d989889c-jx4xd
	bbdc93411ee89       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        7 minutes ago        Running             coredns                   0                   cffeb7f91719f       coredns-6f6b679f8f-8ctjp
	21cba91f907bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        7 minutes ago        Running             storage-provisioner       0                   9885489f05ffd       storage-provisioner
	e9086ec7c6658       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      7 minutes ago        Running             kindnet-cni               0                   7300d870e06dd       kindnet-5jgmz
	c0d8bb8efc5a6       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        7 minutes ago        Running             kube-proxy                0                   f63c51380eace       kube-proxy-vjdhm
	20d8a65b34a90       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        8 minutes ago        Running             etcd                      0                   393580ac3310e       etcd-addons-606349
	5b36378235e83       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        8 minutes ago        Running             kube-scheduler            0                   3d52049ea4db5       kube-scheduler-addons-606349
	8254d00c3ba90       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        8 minutes ago        Running             kube-apiserver            0                   ea2a34eb927e8       kube-apiserver-addons-606349
	5b54e04f88c26       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        8 minutes ago        Running             kube-controller-manager   0                   8b0ea0c0fedc7       kube-controller-manager-addons-606349
	
	
	==> coredns [bbdc93411ee89a2417b6b1e8a74f87b93757d61d1ff049726d814f588c7bd2e8] <==
	[INFO] 10.244.0.18:56454 - 57187 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00251171s
	[INFO] 10.244.0.18:39595 - 13081 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000146198s
	[INFO] 10.244.0.18:39595 - 61982 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092799s
	[INFO] 10.244.0.18:52789 - 35962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107454s
	[INFO] 10.244.0.18:52789 - 20350 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153115s
	[INFO] 10.244.0.18:38983 - 3682 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047589s
	[INFO] 10.244.0.18:38983 - 10848 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034084s
	[INFO] 10.244.0.18:38226 - 38572 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004667s
	[INFO] 10.244.0.18:38226 - 27055 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034264s
	[INFO] 10.244.0.18:54811 - 51498 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001744363s
	[INFO] 10.244.0.18:54811 - 53284 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00136552s
	[INFO] 10.244.0.18:57115 - 46850 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093431s
	[INFO] 10.244.0.18:57115 - 64284 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044988s
	[INFO] 10.244.0.19:40719 - 43595 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000263875s
	[INFO] 10.244.0.19:37644 - 15119 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000410392s
	[INFO] 10.244.0.19:58790 - 54113 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016863s
	[INFO] 10.244.0.19:33847 - 44098 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091856s
	[INFO] 10.244.0.19:48670 - 33356 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130936s
	[INFO] 10.244.0.19:53728 - 63183 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000239653s
	[INFO] 10.244.0.19:53416 - 55352 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003389405s
	[INFO] 10.244.0.19:48410 - 21450 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004221703s
	[INFO] 10.244.0.19:42844 - 56534 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001826866s
	[INFO] 10.244.0.19:38615 - 30667 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002585662s
	[INFO] 10.244.0.22:52400 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015684s
	[INFO] 10.244.0.22:36078 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098173s
	
	
	==> describe nodes <==
	Name:               addons-606349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-606349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=addons-606349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T12_26_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-606349
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:26:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-606349
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:34:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:33:56 +0000   Fri, 16 Aug 2024 12:26:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:33:56 +0000   Fri, 16 Aug 2024 12:26:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:33:56 +0000   Fri, 16 Aug 2024 12:26:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:33:56 +0000   Fri, 16 Aug 2024 12:27:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-606349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 7716b7cc286d4cd2b024d8361134384f
	  System UUID:                a1c189a3-b18b-4e19-b9eb-1cda8c1cacc5
	  Boot ID:                    cb16ac7a-0cca-4a0e-b7d0-05329bf090df
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  default                     hello-world-app-55bf9c44b4-ktmlr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 coredns-6f6b679f8f-8ctjp                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m58s
	  kube-system                 etcd-addons-606349                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m3s
	  kube-system                 kindnet-5jgmz                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m59s
	  kube-system                 kube-apiserver-addons-606349               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-controller-manager-addons-606349      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-proxy-vjdhm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-addons-606349               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  local-path-storage          local-path-provisioner-86d989889c-jx4xd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m52s  kube-proxy       
	  Normal   Starting                 8m4s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m4s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m3s   kubelet          Node addons-606349 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m3s   kubelet          Node addons-606349 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m3s   kubelet          Node addons-606349 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m59s  node-controller  Node addons-606349 event: Registered Node addons-606349 in Controller
	  Normal   NodeReady                7m12s  kubelet          Node addons-606349 status is now: NodeReady
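	A minimal sketch (not part of the report) double-checking the "Allocated resources" arithmetic above: the node exposes 2 allocatable CPUs, and the six non-zero CPU requests in the pod table sum to 850m, which kubectl reports as a truncated whole percentage.

	```python
	# CPU requests from the non-terminated pods table above (millicores):
	# coredns 100m, etcd 100m, kindnet 100m, kube-apiserver 250m,
	# kube-controller-manager 200m, kube-scheduler 100m.
	requests_m = [100, 100, 100, 250, 200, 100]
	allocatable_m = 2 * 1000  # "cpu: 2" under Allocatable -> 2000 millicores

	total_m = sum(requests_m)
	pct = total_m * 100 // allocatable_m  # kubectl truncates to a whole percent

	print(f"cpu requests: {total_m}m ({pct}%)")  # matches "cpu 850m (42%)"
	```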
	
	
	==> dmesg <==
	[Aug16 10:02] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Aug16 11:25] FS-Cache: Duplicate cookie detected
	[  +0.000691] FS-Cache: O-cookie c=0000005a [p=00000002 fl=222 nc=0 na=1]
	[  +0.000926] FS-Cache: O-cookie d=00000000a864430e{9P.session} n=000000009bb6de5b
	[  +0.001091] FS-Cache: O-key=[10] '34333033313135373335'
	[  +0.000765] FS-Cache: N-cookie c=0000005b [p=00000002 fl=2 nc=0 na=1]
	[  +0.000894] FS-Cache: N-cookie d=00000000a864430e{9P.session} n=000000006a6ee473
	[  +0.001065] FS-Cache: N-key=[10] '34333033313135373335'
	[Aug16 11:58] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[  +0.866060] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [20d8a65b34a9095a0855b56977c5799d94cf162bf5fc1ec0c7451fd646c0c928] <==
	{"level":"warn","ts":"2024-08-16T12:26:23.325124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.810713Z","time spent":"513.884532ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7425,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-addons-606349\" mod_revision:299 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-addons-606349\" value_size:7362 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-addons-606349\" > >"}
	{"level":"info","ts":"2024-08-16T12:26:23.170145Z","caller":"traceutil/trace.go:171","msg":"trace[1433172012] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"510.694829ms","start":"2024-08-16T12:26:22.659437Z","end":"2024-08-16T12:26:23.170131Z","steps":["trace[1433172012] 'process raft request'  (duration: 495.866817ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.335901Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.659420Z","time spent":"676.253591ms","remote":"127.0.0.1:40580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":669,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy.17ec3523154eae63\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy.17ec3523154eae63\" value_size:595 lease:8128031248180017206 >> failure:<>"}
	{"level":"info","ts":"2024-08-16T12:26:23.170191Z","caller":"traceutil/trace.go:171","msg":"trace[661420716] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"349.997731ms","start":"2024-08-16T12:26:22.820168Z","end":"2024-08-16T12:26:23.170165Z","steps":["trace[661420716] 'process raft request'  (duration: 335.180952ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.337831Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.820149Z","time spent":"517.53014ms","remote":"127.0.0.1:40580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-vjdhm.17ec352330281631\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-vjdhm.17ec352330281631\" value_size:612 lease:8128031248180017206 >> failure:<>"}
	{"level":"warn","ts":"2024-08-16T12:26:23.345892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.327106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-08-16T12:26:23.346067Z","caller":"traceutil/trace.go:171","msg":"trace[986765240] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:333; }","duration":"159.515773ms","start":"2024-08-16T12:26:23.186539Z","end":"2024-08-16T12:26:23.346055Z","steps":["trace[986765240] 'agreement among raft nodes before linearized reading'  (duration: 159.15992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.170250Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.003454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-08-16T12:26:23.346664Z","caller":"traceutil/trace.go:171","msg":"trace[1165937770] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:331; }","duration":"535.430571ms","start":"2024-08-16T12:26:22.811223Z","end":"2024-08-16T12:26:23.346653Z","steps":["trace[1165937770] 'agreement among raft nodes before linearized reading'  (duration: 358.979265ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.346706Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.811202Z","time spent":"535.490189ms","remote":"127.0.0.1:40700","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":209,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" "}
	{"level":"warn","ts":"2024-08-16T12:26:23.170284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.645968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:26:23.377963Z","caller":"traceutil/trace.go:171","msg":"trace[1751129751] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:331; }","duration":"567.308378ms","start":"2024-08-16T12:26:22.810632Z","end":"2024-08-16T12:26:23.377940Z","steps":["trace[1751129751] 'agreement among raft nodes before linearized reading'  (duration: 359.635194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.378419Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.810615Z","time spent":"567.503281ms","remote":"127.0.0.1:40602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":24,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces\" limit:1 "}
	{"level":"info","ts":"2024-08-16T12:26:23.170742Z","caller":"traceutil/trace.go:171","msg":"trace[48779832] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"121.521878ms","start":"2024-08-16T12:26:23.049203Z","end":"2024-08-16T12:26:23.170725Z","steps":["trace[48779832] 'process raft request'  (duration: 121.385338ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:26:23.170773Z","caller":"traceutil/trace.go:171","msg":"trace[81475285] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"174.702117ms","start":"2024-08-16T12:26:22.996061Z","end":"2024-08-16T12:26:23.170763Z","steps":["trace[81475285] 'process raft request'  (duration: 174.435871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.395781Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.996042Z","time spent":"399.677091ms","remote":"127.0.0.1:40580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":704,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-6f6b679f8f.17ec35233ef26562\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-6f6b679f8f.17ec35233ef26562\" value_size:622 lease:8128031248180017206 >> failure:<>"}
	{"level":"warn","ts":"2024-08-16T12:26:23.379741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:23.049178Z","time spent":"330.520558ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3505,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-6f6b679f8f-8ctjp\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-6f6b679f8f-8ctjp\" value_size:3446 >> failure:<>"}
	{"level":"info","ts":"2024-08-16T12:26:23.442194Z","caller":"traceutil/trace.go:171","msg":"trace[546240696] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-5jgmz; range_end:; response_count:1; response_revision:331; }","duration":"455.055598ms","start":"2024-08-16T12:26:22.810484Z","end":"2024-08-16T12:26:23.265540Z","steps":["trace[546240696] 'agreement among raft nodes before linearized reading'  (duration: 347.920414ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:23.449916Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:26:22.810437Z","time spent":"639.177965ms","remote":"127.0.0.1:40674","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3713,"request content":"key:\"/registry/pods/kube-system/kindnet-5jgmz\" "}
	{"level":"info","ts":"2024-08-16T12:26:24.913517Z","caller":"traceutil/trace.go:171","msg":"trace[185896217] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"104.995525ms","start":"2024-08-16T12:26:24.808485Z","end":"2024-08-16T12:26:24.913481Z","steps":["trace[185896217] 'process raft request'  (duration: 81.652083ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:26:24.987912Z","caller":"traceutil/trace.go:171","msg":"trace[1717652546] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"160.954524ms","start":"2024-08-16T12:26:24.817926Z","end":"2024-08-16T12:26:24.978880Z","steps":["trace[1717652546] 'process raft request'  (duration: 131.68774ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T12:26:25.090390Z","caller":"traceutil/trace.go:171","msg":"trace[160194359] linearizableReadLoop","detail":"{readStateIndex:352; appliedIndex:352; }","duration":"139.510588ms","start":"2024-08-16T12:26:24.950859Z","end":"2024-08-16T12:26:25.090369Z","steps":["trace[160194359] 'read index received'  (duration: 139.501628ms)","trace[160194359] 'applied index is now lower than readState.Index'  (duration: 7.598µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T12:26:25.098425Z","caller":"traceutil/trace.go:171","msg":"trace[1348143978] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"147.46639ms","start":"2024-08-16T12:26:24.950937Z","end":"2024-08-16T12:26:25.098404Z","steps":["trace[1348143978] 'process raft request'  (duration: 147.168194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:26:25.098548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.660727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T12:26:25.102781Z","caller":"traceutil/trace.go:171","msg":"trace[1234012080] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:343; }","duration":"151.913748ms","start":"2024-08-16T12:26:24.950855Z","end":"2024-08-16T12:26:25.102768Z","steps":["trace[1234012080] 'agreement among raft nodes before linearized reading'  (duration: 147.618922ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:34:21 up 10:16,  0 users,  load average: 0.52, 0.99, 1.71
	Linux addons-606349 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e9086ec7c665816871c69a6b3c6e1521a95a6c3104276a412228ea68a619e2e4] <==
	E0816 12:33:17.253344       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 12:33:18.238395       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:33:18.238433       1 main.go:299] handling current node
	W0816 12:33:18.930872       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 12:33:18.930906       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0816 12:33:19.800504       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 12:33:19.800544       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 12:33:28.238528       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:33:28.238565       1 main.go:299] handling current node
	I0816 12:33:38.238544       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:33:38.238581       1 main.go:299] handling current node
	I0816 12:33:48.238675       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:33:48.238712       1 main.go:299] handling current node
	I0816 12:33:58.238928       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:33:58.238964       1 main.go:299] handling current node
	I0816 12:34:08.238893       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:34:08.238931       1 main.go:299] handling current node
	W0816 12:34:08.472368       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 12:34:08.472405       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0816 12:34:11.770188       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:34:11.770223       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0816 12:34:13.783532       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 12:34:13.783565       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 12:34:18.238891       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 12:34:18.238927       1 main.go:299] handling current node
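	The kindnet section repeats the same RBAC failure for three resources. A sketch (an illustration, not part of the test tooling) of condensing those errors into the list of resources the `system:serviceaccount:kube-system:kindnet` account cannot list, using abridged lines from the log above:

	```python
	import re

	# Abridged RBAC errors from the kindnet log above.
	log_lines = [
	    'E0816 12:33:17.253344 ... cannot list resource "pods" in API group "" at the cluster scope',
	    'E0816 12:33:18.930906 ... cannot list resource "namespaces" in API group "" at the cluster scope',
	    'E0816 12:33:19.800544 ... cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope',
	]

	pattern = re.compile(r'cannot list resource "([^"]+)" in API group "([^"]*)"')

	# De-duplicate (resource, group) pairs across the repeating errors.
	missing = sorted({m.groups() for line in log_lines if (m := pattern.search(line))})
	for resource, group in missing:
	    print(f"{resource} (group: {group or 'core'})")
	```

	The matching fix would be RBAC rules granting list/watch on those resources to the kindnet ClusterRole; since the errors repeat throughout the run, the grant was missing for the whole test, not lost mid-run.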
	
	
	==> kube-apiserver [8254d00c3ba90768ddb8796f419aa64db2c8219f603c6ab2a9315a0148773d80] <==
	E0816 12:28:51.635453       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37768: use of closed network connection
	E0816 12:28:52.041028       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37790: use of closed network connection
	E0816 12:29:00.842674       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0816 12:29:16.096740       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0816 12:29:25.410773       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 12:30:02.524141       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.524275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.551548       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.552849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.572124       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.572261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.671300       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.673290       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 12:30:02.696107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 12:30:02.696219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 12:30:03.671562       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 12:30:03.696921       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0816 12:30:03.714960       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0816 12:30:09.506995       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 12:30:10.550416       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 12:30:15.142756       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 12:30:15.461904       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.85.14"}
	I0816 12:32:35.369720       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.173.120"}
	E0816 12:32:38.014124       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0816 12:33:22.671083       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.228.246"}
	
	
	==> kube-controller-manager [5b54e04f88c26416f2b2d8d3d73b7067bd8d2dafbbfd236b383241449b084690] <==
	W0816 12:33:21.604200       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:33:21.604255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:33:21.677887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-c4bc9b5f8" duration="7.237µs"
	I0816 12:33:22.736465       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="23.530522ms"
	E0816 12:33:22.737141       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0816 12:33:22.782755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="43.87838ms"
	I0816 12:33:22.792203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="9.013821ms"
	I0816 12:33:22.813162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="20.82901ms"
	I0816 12:33:22.813542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="73.312µs"
	I0816 12:33:26.259690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-606349"
	I0816 12:33:27.462013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="58.371µs"
	I0816 12:33:27.486658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="10.794562ms"
	I0816 12:33:27.489797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="81.936µs"
	W0816 12:33:32.779588       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:33:32.779633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:33:34.278870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="9.288µs"
	I0816 12:33:44.400197       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0816 12:33:49.234266       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:33:49.234311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:33:56.577146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-606349"
	W0816 12:34:00.392302       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:34:00.392464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 12:34:10.225244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 12:34:10.225295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 12:34:19.324789       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="6.326µs"
	
	
	==> kube-proxy [c0d8bb8efc5a6717ce3e0b4ffce376f1bfe1b809957f70faf484d2798e7fa960] <==
	I0816 12:26:27.416758       1 server_linux.go:66] "Using iptables proxy"
	I0816 12:26:28.411848       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0816 12:26:28.423262       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:26:28.595993       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0816 12:26:28.596061       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:26:28.598137       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:26:28.598633       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:26:28.598657       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:26:28.606140       1 config.go:197] "Starting service config controller"
	I0816 12:26:28.606174       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:26:28.606192       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:26:28.606196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:26:28.606586       1 config.go:326] "Starting node config controller"
	I0816 12:26:28.606604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:26:28.711699       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:26:28.711763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:26:28.712111       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b36378235e83568da22d79bab9fd542bd2d8b811c3ac53446fdd2c0f8439b0a] <==
	W0816 12:26:14.457730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:14.457796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:14.457906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 12:26:14.458001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:26:14.458109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 12:26:14.458211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 12:26:14.458309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.457393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:14.458394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:14.459462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 12:26:14.459573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.283261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:15.283388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.341510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 12:26:15.341907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.350332       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 12:26:15.350477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 12:26:15.514850       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 12:26:15.514974       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 12:26:17.449609       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 12:33:27 addons-606349 kubelet[1498]: I0816 12:33:27.474018    1498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-57fb76fcdb-8dczv" podStartSLOduration=2.217510359 podStartE2EDuration="5.473997853s" podCreationTimestamp="2024-08-16 12:33:22 +0000 UTC" firstStartedPulling="2024-08-16 12:33:23.111423094 +0000 UTC m=+426.217530988" lastFinishedPulling="2024-08-16 12:33:26.367910588 +0000 UTC m=+429.474018482" observedRunningTime="2024-08-16 12:33:27.461661004 +0000 UTC m=+430.567768906" watchObservedRunningTime="2024-08-16 12:33:27.473997853 +0000 UTC m=+430.580105747"
	Aug 16 12:33:34 addons-606349 kubelet[1498]: I0816 12:33:34.463502    1498 scope.go:117] "RemoveContainer" containerID="58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf"
	Aug 16 12:33:34 addons-606349 kubelet[1498]: I0816 12:33:34.480205    1498 scope.go:117] "RemoveContainer" containerID="58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf"
	Aug 16 12:33:34 addons-606349 kubelet[1498]: E0816 12:33:34.480651    1498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf\": container with ID starting with 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf not found: ID does not exist" containerID="58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf"
	Aug 16 12:33:34 addons-606349 kubelet[1498]: I0816 12:33:34.480696    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf"} err="failed to get container status \"58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf\": rpc error: code = NotFound desc = could not find container \"58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf\": container with ID starting with 58e97fbe1770ecc2b0af3cc95e5e23a85cb41bedbb19241d42b7a9992b523bdf not found: ID does not exist"
	Aug 16 12:33:34 addons-606349 kubelet[1498]: I0816 12:33:34.499306    1498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22rnp\" (UniqueName: \"kubernetes.io/projected/5cf682b2-d9a3-465d-9945-29f18354cb72-kube-api-access-22rnp\") pod \"5cf682b2-d9a3-465d-9945-29f18354cb72\" (UID: \"5cf682b2-d9a3-465d-9945-29f18354cb72\") "
	Aug 16 12:33:34 addons-606349 kubelet[1498]: I0816 12:33:34.503887    1498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf682b2-d9a3-465d-9945-29f18354cb72-kube-api-access-22rnp" (OuterVolumeSpecName: "kube-api-access-22rnp") pod "5cf682b2-d9a3-465d-9945-29f18354cb72" (UID: "5cf682b2-d9a3-465d-9945-29f18354cb72"). InnerVolumeSpecName "kube-api-access-22rnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:33:34 addons-606349 kubelet[1498]: I0816 12:33:34.600150    1498 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-22rnp\" (UniqueName: \"kubernetes.io/projected/5cf682b2-d9a3-465d-9945-29f18354cb72-kube-api-access-22rnp\") on node \"addons-606349\" DevicePath \"\""
	Aug 16 12:33:35 addons-606349 kubelet[1498]: I0816 12:33:35.042896    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf682b2-d9a3-465d-9945-29f18354cb72" path="/var/lib/kubelet/pods/5cf682b2-d9a3-465d-9945-29f18354cb72/volumes"
	Aug 16 12:33:37 addons-606349 kubelet[1498]: E0816 12:33:37.370017    1498 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811617369709034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:33:37 addons-606349 kubelet[1498]: E0816 12:33:37.370056    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811617369709034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:33:47 addons-606349 kubelet[1498]: E0816 12:33:47.372528    1498 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811627372250235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:33:47 addons-606349 kubelet[1498]: E0816 12:33:47.372569    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811627372250235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:33:57 addons-606349 kubelet[1498]: E0816 12:33:57.375407    1498 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811637375160649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:33:57 addons-606349 kubelet[1498]: E0816 12:33:57.375446    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811637375160649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:34:07 addons-606349 kubelet[1498]: E0816 12:34:07.378044    1498 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811647377767404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:34:07 addons-606349 kubelet[1498]: E0816 12:34:07.378084    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811647377767404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:34:17 addons-606349 kubelet[1498]: E0816 12:34:17.380389    1498 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811657380115921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:34:17 addons-606349 kubelet[1498]: E0816 12:34:17.380899    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723811657380115921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:594328,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 12:34:20 addons-606349 kubelet[1498]: I0816 12:34:20.617694    1498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/93c15fce-49db-484e-817d-4f2f088bd4e5-tmp-dir\") pod \"93c15fce-49db-484e-817d-4f2f088bd4e5\" (UID: \"93c15fce-49db-484e-817d-4f2f088bd4e5\") "
	Aug 16 12:34:20 addons-606349 kubelet[1498]: I0816 12:34:20.617766    1498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clxvt\" (UniqueName: \"kubernetes.io/projected/93c15fce-49db-484e-817d-4f2f088bd4e5-kube-api-access-clxvt\") pod \"93c15fce-49db-484e-817d-4f2f088bd4e5\" (UID: \"93c15fce-49db-484e-817d-4f2f088bd4e5\") "
	Aug 16 12:34:20 addons-606349 kubelet[1498]: I0816 12:34:20.618141    1498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c15fce-49db-484e-817d-4f2f088bd4e5-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "93c15fce-49db-484e-817d-4f2f088bd4e5" (UID: "93c15fce-49db-484e-817d-4f2f088bd4e5"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 16 12:34:20 addons-606349 kubelet[1498]: I0816 12:34:20.625159    1498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c15fce-49db-484e-817d-4f2f088bd4e5-kube-api-access-clxvt" (OuterVolumeSpecName: "kube-api-access-clxvt") pod "93c15fce-49db-484e-817d-4f2f088bd4e5" (UID: "93c15fce-49db-484e-817d-4f2f088bd4e5"). InnerVolumeSpecName "kube-api-access-clxvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 12:34:20 addons-606349 kubelet[1498]: I0816 12:34:20.718465    1498 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/93c15fce-49db-484e-817d-4f2f088bd4e5-tmp-dir\") on node \"addons-606349\" DevicePath \"\""
	Aug 16 12:34:20 addons-606349 kubelet[1498]: I0816 12:34:20.718498    1498 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-clxvt\" (UniqueName: \"kubernetes.io/projected/93c15fce-49db-484e-817d-4f2f088bd4e5-kube-api-access-clxvt\") on node \"addons-606349\" DevicePath \"\""
	
	
	==> storage-provisioner [21cba91f907bb4abdbf83f51e5c492db7ff92a47790e579450b57efe1e853126] <==
	I0816 12:27:09.210683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 12:27:09.268808       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 12:27:09.268932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 12:27:09.282317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 12:27:09.282689       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-606349_487ba23a-6f7c-42db-9bd9-8e545be5ba0a!
	I0816 12:27:09.282380       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a29a483c-3c46-40f9-9b11-c30ec8e820c9", APIVersion:"v1", ResourceVersion:"891", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-606349_487ba23a-6f7c-42db-9bd9-8e545be5ba0a became leader
	I0816 12:27:09.412252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-606349_487ba23a-6f7c-42db-9bd9-8e545be5ba0a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-606349 -n addons-606349
helpers_test.go:261: (dbg) Run:  kubectl --context addons-606349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (304.47s)


Test pass (296/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 7.05
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 194.74
31 TestAddons/serial/GCPAuth/Namespaces 0.22
33 TestAddons/parallel/Registry 16.64
35 TestAddons/parallel/InspektorGadget 11.81
39 TestAddons/parallel/CSI 62.14
40 TestAddons/parallel/Headlamp 17.74
41 TestAddons/parallel/CloudSpanner 6.56
42 TestAddons/parallel/LocalPath 9.35
43 TestAddons/parallel/NvidiaDevicePlugin 6.55
44 TestAddons/parallel/Yakd 11.73
45 TestAddons/StoppedEnableDisable 12.21
46 TestCertOptions 40.47
47 TestCertExpiration 253.83
49 TestForceSystemdFlag 45.06
50 TestForceSystemdEnv 44.68
56 TestErrorSpam/setup 29.36
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.14
59 TestErrorSpam/pause 1.82
60 TestErrorSpam/unpause 1.85
61 TestErrorSpam/stop 1.43
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 52.2
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 28.81
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.41
73 TestFunctional/serial/CacheCmd/cache/add_local 1.34
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 37.17
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.68
84 TestFunctional/serial/LogsFileCmd 1.73
85 TestFunctional/serial/InvalidService 4.17
87 TestFunctional/parallel/ConfigCmd 0.49
88 TestFunctional/parallel/DashboardCmd 9.76
89 TestFunctional/parallel/DryRun 0.46
90 TestFunctional/parallel/InternationalLanguage 0.22
91 TestFunctional/parallel/StatusCmd 1.34
95 TestFunctional/parallel/ServiceCmdConnect 12.71
96 TestFunctional/parallel/AddonsCmd 0.26
97 TestFunctional/parallel/PersistentVolumeClaim 27.71
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 2.23
102 TestFunctional/parallel/FileSync 0.38
103 TestFunctional/parallel/CertSync 2.16
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
111 TestFunctional/parallel/License 0.28
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.77
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.17
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
124 TestFunctional/parallel/ServiceCmd/List 0.62
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
126 TestFunctional/parallel/ProfileCmd/profile_list 0.51
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
130 TestFunctional/parallel/MountCmd/any-port 7.42
131 TestFunctional/parallel/ServiceCmd/Format 0.55
132 TestFunctional/parallel/ServiceCmd/URL 0.42
133 TestFunctional/parallel/MountCmd/specific-port 2.57
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.66
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.21
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.72
142 TestFunctional/parallel/ImageCommands/Setup 0.76
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.98
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 175.49
160 TestMultiControlPlane/serial/DeployApp 6.76
161 TestMultiControlPlane/serial/PingHostFromPods 1.67
162 TestMultiControlPlane/serial/AddWorkerNode 35.18
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
165 TestMultiControlPlane/serial/CopyFile 18.85
166 TestMultiControlPlane/serial/StopSecondaryNode 12.8
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 26.27
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.18
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 192.45
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.57
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
173 TestMultiControlPlane/serial/StopCluster 35.72
174 TestMultiControlPlane/serial/RestartCluster 98.78
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
176 TestMultiControlPlane/serial/AddSecondaryNode 76.25
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
181 TestJSONOutput/start/Command 52.45
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.82
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.89
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 39.91
207 TestKicCustomNetwork/use_default_bridge_network 35.61
208 TestKicExistingNetwork 34.21
209 TestKicCustomSubnet 33.67
210 TestKicStaticIP 33.78
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 72.22
215 TestMountStart/serial/StartWithMountFirst 6.97
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 7.17
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 8.45
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 79.15
227 TestMultiNode/serial/DeployApp2Nodes 5.75
228 TestMultiNode/serial/PingHostFrom2Pods 0.94
229 TestMultiNode/serial/AddNode 29.02
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 10.16
233 TestMultiNode/serial/StopNode 2.23
234 TestMultiNode/serial/StartAfterStop 10.35
235 TestMultiNode/serial/RestartKeepsNodes 81.04
236 TestMultiNode/serial/DeleteNode 5.34
237 TestMultiNode/serial/StopMultiNode 25.07
238 TestMultiNode/serial/RestartMultiNode 50.71
239 TestMultiNode/serial/ValidateNameConflict 31.27
244 TestPreload 123.68
246 TestScheduledStopUnix 108.17
249 TestInsufficientStorage 10.21
250 TestRunningBinaryUpgrade 75.88
252 TestKubernetesUpgrade 390.77
253 TestMissingContainerUpgrade 148.98
255 TestPause/serial/Start 56.7
256 TestPause/serial/SecondStartNoReconfiguration 22.94
257 TestPause/serial/Pause 0.72
258 TestPause/serial/VerifyStatus 0.33
259 TestPause/serial/Unpause 0.65
260 TestPause/serial/PauseAgain 0.89
261 TestPause/serial/DeletePaused 2.63
262 TestPause/serial/VerifyDeletedResources 0.15
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
265 TestNoKubernetes/serial/StartWithK8s 33.4
266 TestNoKubernetes/serial/StartWithStopK8s 12.83
267 TestNoKubernetes/serial/Start 10.34
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.49
269 TestNoKubernetes/serial/ProfileList 2.42
270 TestNoKubernetes/serial/Stop 1.33
271 TestNoKubernetes/serial/StartNoArgs 7.46
279 TestNetworkPlugins/group/false 4.7
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
284 TestStoppedBinaryUpgrade/Setup 0.79
285 TestStoppedBinaryUpgrade/Upgrade 111.79
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
294 TestNetworkPlugins/group/auto/Start 53.67
295 TestNetworkPlugins/group/auto/KubeletFlags 0.29
296 TestNetworkPlugins/group/auto/NetCatPod 10.34
297 TestNetworkPlugins/group/auto/DNS 0.22
298 TestNetworkPlugins/group/auto/Localhost 0.16
299 TestNetworkPlugins/group/auto/HairPin 0.16
300 TestNetworkPlugins/group/kindnet/Start 52.68
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
303 TestNetworkPlugins/group/kindnet/NetCatPod 12.24
304 TestNetworkPlugins/group/kindnet/DNS 0.2
305 TestNetworkPlugins/group/kindnet/Localhost 0.19
306 TestNetworkPlugins/group/kindnet/HairPin 0.21
307 TestNetworkPlugins/group/calico/Start 69.37
308 TestNetworkPlugins/group/custom-flannel/Start 62.98
309 TestNetworkPlugins/group/calico/ControllerPod 6.02
310 TestNetworkPlugins/group/calico/KubeletFlags 0.49
311 TestNetworkPlugins/group/calico/NetCatPod 12.38
312 TestNetworkPlugins/group/calico/DNS 0.27
313 TestNetworkPlugins/group/calico/Localhost 0.17
314 TestNetworkPlugins/group/calico/HairPin 0.21
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.43
317 TestNetworkPlugins/group/enable-default-cni/Start 82.56
318 TestNetworkPlugins/group/custom-flannel/DNS 0.45
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.34
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.32
321 TestNetworkPlugins/group/flannel/Start 61.28
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
329 TestNetworkPlugins/group/flannel/NetCatPod 13.39
330 TestNetworkPlugins/group/flannel/DNS 0.29
331 TestNetworkPlugins/group/flannel/Localhost 0.23
332 TestNetworkPlugins/group/flannel/HairPin 0.2
333 TestNetworkPlugins/group/bridge/Start 73.84
335 TestStartStop/group/old-k8s-version/serial/FirstStart 190.95
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
337 TestNetworkPlugins/group/bridge/NetCatPod 14.43
338 TestNetworkPlugins/group/bridge/DNS 0.19
339 TestNetworkPlugins/group/bridge/Localhost 0.16
340 TestNetworkPlugins/group/bridge/HairPin 0.16
342 TestStartStop/group/no-preload/serial/FirstStart 60.9
343 TestStartStop/group/no-preload/serial/DeployApp 8.41
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
345 TestStartStop/group/no-preload/serial/Stop 11.97
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
347 TestStartStop/group/no-preload/serial/SecondStart 290.95
348 TestStartStop/group/old-k8s-version/serial/DeployApp 9.85
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.5
350 TestStartStop/group/old-k8s-version/serial/Stop 12.17
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
352 TestStartStop/group/old-k8s-version/serial/SecondStart 148.08
353 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
354 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.39
355 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
356 TestStartStop/group/old-k8s-version/serial/Pause 3.03
358 TestStartStop/group/embed-certs/serial/FirstStart 51.31
359 TestStartStop/group/embed-certs/serial/DeployApp 9.36
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
361 TestStartStop/group/embed-certs/serial/Stop 11.96
362 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
363 TestStartStop/group/embed-certs/serial/SecondStart 303.24
364 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
366 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
367 TestStartStop/group/no-preload/serial/Pause 3.6
369 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.46
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 291.88
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.03
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
378 TestStartStop/group/embed-certs/serial/Pause 3.19
380 TestStartStop/group/newest-cni/serial/FirstStart 38.58
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.87
383 TestStartStop/group/newest-cni/serial/Stop 1.29
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
385 TestStartStop/group/newest-cni/serial/SecondStart 18.29
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
389 TestStartStop/group/newest-cni/serial/Pause 3.57
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.05
TestDownloadOnly/v1.20.0/json-events (7.98s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-476882 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-476882 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.981821342s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.98s)

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-476882
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-476882: exit status 85 (70.933626ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-476882 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |          |
	|         | -p download-only-476882        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:25:08
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:25:08.840105 1386712 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:25:08.840320 1386712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:08.840349 1386712 out.go:358] Setting ErrFile to fd 2...
	I0816 12:25:08.840368 1386712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:08.840668 1386712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	W0816 12:25:08.840855 1386712 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-1381335/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-1381335/.minikube/config/config.json: no such file or directory
	I0816 12:25:08.841334 1386712 out.go:352] Setting JSON to true
	I0816 12:25:08.842287 1386712 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36452,"bootTime":1723774657,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 12:25:08.842388 1386712 start.go:139] virtualization:  
	I0816 12:25:08.845503 1386712 out.go:97] [download-only-476882] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0816 12:25:08.845678 1386712 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 12:25:08.845785 1386712 notify.go:220] Checking for updates...
	I0816 12:25:08.847680 1386712 out.go:169] MINIKUBE_LOCATION=19423
	I0816 12:25:08.849728 1386712 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:25:08.851502 1386712 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:25:08.853531 1386712 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 12:25:08.855438 1386712 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0816 12:25:08.860489 1386712 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 12:25:08.860807 1386712 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:25:08.887426 1386712 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 12:25:08.887534 1386712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:08.951836 1386712 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 12:25:08.942211091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:08.951959 1386712 docker.go:307] overlay module found
	I0816 12:25:08.954123 1386712 out.go:97] Using the docker driver based on user configuration
	I0816 12:25:08.954162 1386712 start.go:297] selected driver: docker
	I0816 12:25:08.954170 1386712 start.go:901] validating driver "docker" against <nil>
	I0816 12:25:08.954294 1386712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:09.006581 1386712 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 12:25:08.994242218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:09.006828 1386712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:25:09.007123 1386712 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0816 12:25:09.007313 1386712 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 12:25:09.009833 1386712 out.go:169] Using Docker driver with root privileges
	I0816 12:25:09.012728 1386712 cni.go:84] Creating CNI manager for ""
	I0816 12:25:09.012764 1386712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:25:09.012777 1386712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 12:25:09.012887 1386712 start.go:340] cluster config:
	{Name:download-only-476882 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-476882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:25:09.015292 1386712 out.go:97] Starting "download-only-476882" primary control-plane node in "download-only-476882" cluster
	I0816 12:25:09.015341 1386712 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 12:25:09.017840 1386712 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0816 12:25:09.017884 1386712 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 12:25:09.018002 1386712 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0816 12:25:09.034669 1386712 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0816 12:25:09.034878 1386712 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0816 12:25:09.034983 1386712 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0816 12:25:09.073594 1386712 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0816 12:25:09.073618 1386712 cache.go:56] Caching tarball of preloaded images
	I0816 12:25:09.073806 1386712 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 12:25:09.076279 1386712 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 12:25:09.076316 1386712 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0816 12:25:09.158632 1386712 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-476882 host does not exist
	  To start a cluster, run: "minikube start -p download-only-476882"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-476882
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (7.05s)
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-639766 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-639766 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.04590993s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.05s)

TestDownloadOnly/v1.31.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-639766
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-639766: exit status 85 (70.331612ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-476882 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | -p download-only-476882        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| delete  | -p download-only-476882        | download-only-476882 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC | 16 Aug 24 12:25 UTC |
	| start   | -o=json --download-only        | download-only-639766 | jenkins | v1.33.1 | 16 Aug 24 12:25 UTC |                     |
	|         | -p download-only-639766        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 12:25:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 12:25:17.247134 1386919 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:25:17.247328 1386919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:17.247359 1386919 out.go:358] Setting ErrFile to fd 2...
	I0816 12:25:17.247384 1386919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:25:17.247640 1386919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:25:17.248078 1386919 out.go:352] Setting JSON to true
	I0816 12:25:17.249022 1386919 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36461,"bootTime":1723774657,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 12:25:17.249121 1386919 start.go:139] virtualization:  
	I0816 12:25:17.251679 1386919 out.go:97] [download-only-639766] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 12:25:17.251935 1386919 notify.go:220] Checking for updates...
	I0816 12:25:17.254279 1386919 out.go:169] MINIKUBE_LOCATION=19423
	I0816 12:25:17.256505 1386919 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:25:17.258847 1386919 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:25:17.261218 1386919 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 12:25:17.263223 1386919 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0816 12:25:17.267937 1386919 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 12:25:17.268234 1386919 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:25:17.291235 1386919 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 12:25:17.291336 1386919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:17.349205 1386919 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-16 12:25:17.339927796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:17.349321 1386919 docker.go:307] overlay module found
	I0816 12:25:17.351609 1386919 out.go:97] Using the docker driver based on user configuration
	I0816 12:25:17.351643 1386919 start.go:297] selected driver: docker
	I0816 12:25:17.351650 1386919 start.go:901] validating driver "docker" against <nil>
	I0816 12:25:17.351761 1386919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:25:17.404194 1386919 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-16 12:25:17.395302232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:25:17.404366 1386919 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 12:25:17.404688 1386919 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0816 12:25:17.404845 1386919 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 12:25:17.407036 1386919 out.go:169] Using Docker driver with root privileges
	I0816 12:25:17.408736 1386919 cni.go:84] Creating CNI manager for ""
	I0816 12:25:17.408760 1386919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0816 12:25:17.408772 1386919 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 12:25:17.408865 1386919 start.go:340] cluster config:
	{Name:download-only-639766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-639766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:25:17.410883 1386919 out.go:97] Starting "download-only-639766" primary control-plane node in "download-only-639766" cluster
	I0816 12:25:17.410904 1386919 cache.go:121] Beginning downloading kic base image for docker with crio
	I0816 12:25:17.413005 1386919 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0816 12:25:17.413030 1386919 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:17.413205 1386919 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0816 12:25:17.428408 1386919 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0816 12:25:17.428542 1386919 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0816 12:25:17.428576 1386919 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0816 12:25:17.428582 1386919 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0816 12:25:17.428590 1386919 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0816 12:25:17.471147 1386919 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0816 12:25:17.471171 1386919 cache.go:56] Caching tarball of preloaded images
	I0816 12:25:17.471803 1386919 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 12:25:17.473990 1386919 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 12:25:17.474011 1386919 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0816 12:25:17.563116 1386919 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/19423-1381335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-639766 host does not exist
	  To start a cluster, run: "minikube start -p download-only-639766"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-639766
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-169699 --alsologtostderr --binary-mirror http://127.0.0.1:34739 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-169699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-169699
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-606349
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-606349: exit status 85 (67.22552ms)

-- stdout --
	* Profile "addons-606349" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-606349"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-606349
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-606349: exit status 85 (69.419849ms)

-- stdout --
	* Profile "addons-606349" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-606349"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (194.74s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-606349 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-606349 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m14.737112595s)
--- PASS: TestAddons/Setup (194.74s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-606349 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-606349 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/parallel/Registry (16.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.739903ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-pbm8s" [73faa728-22c2-4a32-a43d-85763f935998] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004228749s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xqwvx" [a9e788b9-88d0-492b-8001-c0da62bb7adc] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005808867s
addons_test.go:342: (dbg) Run:  kubectl --context addons-606349 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-606349 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-606349 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.210452744s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 ip
2024/08/16 12:29:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 addons disable registry --alsologtostderr -v=1: (1.020784712s)
--- PASS: TestAddons/parallel/Registry (16.64s)

TestAddons/parallel/InspektorGadget (11.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rxdf2" [f6368dc2-5871-4199-b0c5-18cfebd8efd1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004474797s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-606349
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-606349: (5.802857941s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/CSI (62.14s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 13.789038ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-606349 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-606349 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [db347812-ac3c-4fb3-9e63-dca5e9931e31] Pending
helpers_test.go:344: "task-pv-pod" [db347812-ac3c-4fb3-9e63-dca5e9931e31] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [db347812-ac3c-4fb3-9e63-dca5e9931e31] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.006093876s
addons_test.go:590: (dbg) Run:  kubectl --context addons-606349 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-606349 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-606349 delete pod task-pv-pod: (1.240235699s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-606349 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-606349 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-606349 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [714cffb5-efa9-41fb-aafe-ef20b975b0b9] Pending
helpers_test.go:344: "task-pv-pod-restore" [714cffb5-efa9-41fb-aafe-ef20b975b0b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [714cffb5-efa9-41fb-aafe-ef20b975b0b9] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003439232s
addons_test.go:632: (dbg) Run:  kubectl --context addons-606349 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-606349 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-606349 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.785810794s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.14s)

TestAddons/parallel/Headlamp (17.74s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-606349 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-606349 --alsologtostderr -v=1: (1.025717874s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-8dczv" [5cf682b2-d9a3-465d-9945-29f18354cb72] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-8dczv" [5cf682b2-d9a3-465d-9945-29f18354cb72] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8dczv" [5cf682b2-d9a3-465d-9945-29f18354cb72] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004907136s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 addons disable headlamp --alsologtostderr -v=1: (5.710851757s)
--- PASS: TestAddons/parallel/Headlamp (17.74s)

TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-rqdqb" [fabc49af-9c6f-44f6-8064-ab2e3b85d722] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003406045s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-606349
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

TestAddons/parallel/LocalPath (9.35s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-606349 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-606349 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bf218cc2-df23-4b7a-a0ba-1a6f1f17bb96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bf218cc2-df23-4b7a-a0ba-1a6f1f17bb96] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bf218cc2-df23-4b7a-a0ba-1a6f1f17bb96] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003541698s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-606349 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 ssh "cat /opt/local-path-provisioner/pvc-b19a7cb6-3608-45c2-ba67-caaddd2e79d9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-606349 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-606349 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.35s)

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tlscx" [50afed3c-442a-4c9e-b404-875b12dd96e9] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003913821s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-606349
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (11.73s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-h8f8w" [c842c280-fbd1-4619-b3e3-1222234c852c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003864833s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-606349 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-606349 addons disable yakd --alsologtostderr -v=1: (5.727389288s)
--- PASS: TestAddons/parallel/Yakd (11.73s)

TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-606349
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-606349: (11.942266198s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-606349
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-606349
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-606349
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

TestCertOptions (40.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-473198 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-473198 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.688014885s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-473198 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-473198 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-473198 -- "sudo cat /etc/kubernetes/admin.conf"
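The `openssl x509` inspection above is how the test confirms the extra `--apiserver-ips`/`--apiserver-names` values ended up in the API server certificate's SANs. That check can be reproduced standalone against a throwaway cert; all paths below are illustrative, not minikube's own (requires OpenSSL 1.1.1+ for `-addext`):

```shell
# Generate a self-signed cert carrying the same extra SANs the test requests.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"
# Inspect it the same way the test does; the requested names and IPs
# should appear under "X509v3 Subject Alternative Name".
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```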
helpers_test.go:175: Cleaning up "cert-options-473198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-473198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-473198: (2.099349163s)
--- PASS: TestCertOptions (40.47s)

TestCertExpiration (253.83s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-443006 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-443006 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.824538232s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-443006 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-443006 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.400003917s)
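The second `start` above extends the certificate lifetime from the deliberately short 3 minutes to 8760 hours, which is one non-leap year:

```shell
# 8760h of certificate validity is exactly one (non-leap) year.
echo "$(( 8760 / 24 )) days"   # prints "365 days"
```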
helpers_test.go:175: Cleaning up "cert-expiration-443006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-443006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-443006: (2.608769424s)
--- PASS: TestCertExpiration (253.83s)

TestForceSystemdFlag (45.06s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-705543 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0816 13:08:41.445624 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-705543 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.734608803s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-705543 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
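The `cat` of `/etc/crio/crio.conf.d/02-crio.conf` above is how the test verifies that `--force-systemd` reached CRI-O. A self-contained sketch of that check, against a hypothetical copy of the drop-in (the file contents here are illustrative, not the exact file minikube writes):

```shell
# Recreate a sample of the drop-in the test inspects; the key assertion
# is that CRI-O was switched to the systemd cgroup manager.
cat > /tmp/02-crio.conf <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
EOF
grep -q 'cgroup_manager = "systemd"' /tmp/02-crio.conf && echo "systemd cgroup manager set"
```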
helpers_test.go:175: Cleaning up "force-systemd-flag-705543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-705543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-705543: (2.996333958s)
--- PASS: TestForceSystemdFlag (45.06s)

TestForceSystemdEnv (44.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-349936 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-349936 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.922969895s)
helpers_test.go:175: Cleaning up "force-systemd-env-349936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-349936
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-349936: (2.757199848s)
--- PASS: TestForceSystemdEnv (44.68s)

TestErrorSpam/setup (29.36s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-971533 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-971533 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-971533 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-971533 --driver=docker  --container-runtime=crio: (29.364556373s)
--- PASS: TestErrorSpam/setup (29.36s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 pause
--- PASS: TestErrorSpam/pause (1.82s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 stop: (1.240467139s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971533 --log_dir /tmp/nospam-971533 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-1381335/.minikube/files/etc/test/nested/copy/1386707/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.2s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-890712 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-890712 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (52.196588103s)
--- PASS: TestFunctional/serial/StartWithProxy (52.20s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (28.81s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-890712 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-890712 --alsologtostderr -v=8: (28.809237341s)
functional_test.go:663: soft start took 28.811365051s for "functional-890712" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.81s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-890712 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 cache add registry.k8s.io/pause:3.1: (1.441471324s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 cache add registry.k8s.io/pause:3.3: (1.464993805s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 cache add registry.k8s.io/pause:latest: (1.50105397s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-890712 /tmp/TestFunctionalserialCacheCmdcacheadd_local412694408/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cache add minikube-local-cache-test:functional-890712
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cache delete minikube-local-cache-test:functional-890712
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-890712
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.668543ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 cache reload: (1.173299309s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh sudo crictl inspecti registry.k8s.io/pause:latest
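The sequence above is: remove the image from the node, watch `crictl inspecti` fail, run `cache reload`, and watch the same inspect succeed. A toy stand-in for that cycle, using plain files in place of images (all paths hypothetical, purely illustrative):

```shell
# "Image" present in minikube's local cache but deleted from the node.
mkdir -p /tmp/cache /tmp/node
echo pause:latest > /tmp/cache/pause
rm -f /tmp/node/pause                                            # the `crictl rmi` step
test -f /tmp/node/pause || echo "inspecti fails: image missing"  # inspect after removal
cp /tmp/cache/pause /tmp/node/pause                              # the `cache reload` step
test -f /tmp/node/pause && echo "inspecti succeeds"              # inspect after reload
```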
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 kubectl -- --context functional-890712 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-890712 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (37.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-890712 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-890712 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.168882568s)
functional_test.go:761: restart took 37.168993525s for "functional-890712" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.17s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-890712 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 logs: (1.683523503s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 logs --file /tmp/TestFunctionalserialLogsFileCmd1197904995/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 logs --file /tmp/TestFunctionalserialLogsFileCmd1197904995/001/logs.txt: (1.725564483s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-890712 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-890712
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-890712: exit status 115 (558.928341ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30823 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-890712 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 config get cpus: exit status 14 (93.243074ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 config get cpus: exit status 14 (83.454469ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (9.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-890712 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-890712 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1413319: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.76s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-890712 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-890712 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (207.874991ms)

-- stdout --
	* [functional-890712] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0816 12:38:08.943595 1413025 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:38:08.943765 1413025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:38:08.943776 1413025 out.go:358] Setting ErrFile to fd 2...
	I0816 12:38:08.943781 1413025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:38:08.944085 1413025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:38:08.944451 1413025 out.go:352] Setting JSON to false
	I0816 12:38:08.945374 1413025 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37232,"bootTime":1723774657,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 12:38:08.945447 1413025 start.go:139] virtualization:  
	I0816 12:38:08.948627 1413025 out.go:177] * [functional-890712] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 12:38:08.952187 1413025 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:38:08.952290 1413025 notify.go:220] Checking for updates...
	I0816 12:38:08.958476 1413025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:38:08.961305 1413025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:38:08.964091 1413025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 12:38:08.966832 1413025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 12:38:08.969628 1413025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:38:08.972763 1413025 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:38:08.973345 1413025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:38:09.016015 1413025 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 12:38:09.016155 1413025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:38:09.080356 1413025 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 12:38:09.067660381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:38:09.080471 1413025 docker.go:307] overlay module found
	I0816 12:38:09.083354 1413025 out.go:177] * Using the docker driver based on existing profile
	I0816 12:38:09.086392 1413025 start.go:297] selected driver: docker
	I0816 12:38:09.086420 1413025 start.go:901] validating driver "docker" against &{Name:functional-890712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-890712 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:38:09.086521 1413025 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:38:09.089788 1413025 out.go:201] 
	W0816 12:38:09.092535 1413025 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 12:38:09.095178 1413025 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-890712 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
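The dry-run exit above is driven by minikube's requested-memory validation: 250MB is below the 1800MB usable minimum, so the run aborts with `RSRC_INSUFFICIENT_REQ_MEMORY` and exit status 23. A minimal shell sketch of that check (the helper name is ours; the threshold, message text, and exit code are taken from this log):

```shell
#!/bin/sh
# Sketch of the memory validation seen in the DryRun log: a request below
# the usable minimum aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23).
check_memory() {
  req_mb=$1   # requested allocation in MB
  min_mb=1800 # usable minimum reported by minikube in this log
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${min_mb}MB"
    return 23
  fi
}

check_memory 250 || echo "exit status: $?"
```

The test passes precisely because this failure path is hit: `--dry-run --memory 250MB` is expected to be rejected before any cluster work starts.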

x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-890712 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-890712 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (218.995277ms)

-- stdout --
	* [functional-890712] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0816 12:38:08.742595 1412972 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:38:08.742852 1412972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:38:08.742885 1412972 out.go:358] Setting ErrFile to fd 2...
	I0816 12:38:08.742910 1412972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:38:08.743284 1412972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:38:08.743816 1412972 out.go:352] Setting JSON to false
	I0816 12:38:08.745104 1412972 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37232,"bootTime":1723774657,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 12:38:08.745243 1412972 start.go:139] virtualization:  
	I0816 12:38:08.748409 1412972 out.go:177] * [functional-890712] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0816 12:38:08.752008 1412972 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 12:38:08.754009 1412972 notify.go:220] Checking for updates...
	I0816 12:38:08.757495 1412972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 12:38:08.760077 1412972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 12:38:08.762612 1412972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 12:38:08.765175 1412972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 12:38:08.767760 1412972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 12:38:08.771098 1412972 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:38:08.771728 1412972 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 12:38:08.804633 1412972 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 12:38:08.804824 1412972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:38:08.872434 1412972 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 12:38:08.862685065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:38:08.872547 1412972 docker.go:307] overlay module found
	I0816 12:38:08.875394 1412972 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0816 12:38:08.878004 1412972 start.go:297] selected driver: docker
	I0816 12:38:08.878028 1412972 start.go:901] validating driver "docker" against &{Name:functional-890712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-890712 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 12:38:08.878155 1412972 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 12:38:08.881387 1412972 out.go:201] 
	W0816 12:38:08.883945 1412972 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 12:38:08.886586 1412972 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
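The French output above is the expected result: the same `RSRC_INSUFFICIENT_REQ_MEMORY` event renders through a locale-selected message catalog. A minimal sketch of that selection (the lookup function and key are ours; the two message strings are lifted from the English and French runs in this report):

```shell
#!/bin/sh
# Sketch of locale-keyed message selection, as exercised by
# TestFunctional/parallel/InternationalLanguage: pick the catalog
# matching the language prefix of $LANG, fall back to English.
render() {
  key=$1
  case "${LANG%%_*}:$key" in
    fr:driver_existing) echo "Utilisation du pilote docker basé sur le profil existant" ;;
    *:driver_existing)  echo "Using the docker driver based on existing profile" ;;
  esac
}

LANG=fr_FR.UTF-8
render driver_existing
```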

x
+
TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

x
+
TestFunctional/parallel/ServiceCmdConnect (12.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-890712 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-890712 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-25sk9" [6835180e-bd55-4cba-b0e7-8fbaa356f761] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-25sk9" [6835180e-bd55-4cba-b0e7-8fbaa356f761] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003866233s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32272
functional_test.go:1675: http://192.168.49.2:32272: success! body:

Hostname: hello-node-connect-65d86f57f4-25sk9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32272
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.71s)
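The endpoint the test curls encodes the cluster node's IP and the service's NodePort; splitting the reported URL back into those parts is plain parameter expansion (the URL is the one from this run):

```shell
#!/bin/sh
# Decompose the URL that `minikube service hello-node-connect --url`
# reported above into node IP and NodePort.
URL="http://192.168.49.2:32272"
hostport=${URL#http://}
NODE_IP=${hostport%%:*}
NODE_PORT=${hostport##*:}
echo "node=$NODE_IP port=$NODE_PORT"
```

The 32272 port is in the NodePort service range (30000-32767), allocated when the deployment was exposed with `--type=NodePort`.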

x
+
TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (27.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4368e944-f629-4ec3-b100-7578aba87b09] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00904718s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-890712 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-890712 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-890712 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-890712 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f3fefcb-2960-48e2-a584-76a52c38aecf] Pending
helpers_test.go:344: "sp-pod" [7f3fefcb-2960-48e2-a584-76a52c38aecf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f3fefcb-2960-48e2-a584-76a52c38aecf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003330502s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-890712 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-890712 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-890712 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca838ed6-ef58-432f-8d19-b215b5ea181e] Pending
helpers_test.go:344: "sp-pod" [ca838ed6-ef58-432f-8d19-b215b5ea181e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca838ed6-ef58-432f-8d19-b215b5ea181e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003451533s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-890712 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.71s)
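The core assertion of the PVC test is that data written by the first `sp-pod` (`touch /tmp/mount/foo`) is still visible to the second one after the pod is deleted and recreated, because the claim outlives its consumers. With a local directory standing in for the PersistentVolume (paths and names are invented for illustration), the sequence reduces to:

```shell
#!/bin/sh
# Simulate the PVC persistence check: the "volume" outlives the pod.
VOL=$(mktemp -d)        # stands in for the PV bound to myclaim

# first sp-pod writes a marker through its /tmp/mount
touch "$VOL/foo"

# pod deleted and recreated; the claim, and its data, must remain
ls "$VOL"               # second sp-pod lists the surviving file
```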

x
+
TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

x
+
TestFunctional/parallel/CpCmd (2.23s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh -n functional-890712 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cp functional-890712:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1091170777/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh -n functional-890712 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh -n functional-890712 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.23s)

x
+
TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1386707/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /etc/test/nested/copy/1386707/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

x
+
TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1386707.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /etc/ssl/certs/1386707.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1386707.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /usr/share/ca-certificates/1386707.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/13867072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /etc/ssl/certs/13867072.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/13867072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /usr/share/ca-certificates/13867072.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)
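CertSync checks the synced certs under two names: the `<pid>.pem` copies and hash-named entries like `/etc/ssl/certs/51391683.0`. The latter follow the OpenSSL convention of naming trust-store entries after the certificate's subject hash, with a `.0` suffix to disambiguate collisions. The naming can be reproduced with a throwaway CA (the paths and CN here are illustrative, not the test's actual cert):

```shell
#!/bin/sh
# Show where hash-style names like 51391683.0 come from: OpenSSL derives
# an 8-hex-digit hash from the certificate subject and uses it, plus a
# numeric suffix, as the trust-store filename.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -noout -subject_hash -in "$dir/ca.pem")
echo "cert would be installed as /etc/ssl/certs/${hash}.0"
```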

x
+
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-890712 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
2024/08/16 12:38:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh "sudo systemctl is-active docker": exit status 1 (329.862952ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh "sudo systemctl is-active containerd": exit status 1 (293.572355ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
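The non-zero exits above are the passing case, not a failure: `systemctl is-active` prints the unit state and exits 0 only for `active`, and systemd uses exit status 3 for "not active" — which surfaces through SSH as `Process exited with status 3`. A stand-in with the same contract (the unit-to-state table is invented to mirror this crio-runtime run):

```shell
#!/bin/sh
# Mimic `systemctl is-active`: print the state, exit 0 only if active.
# Exit status 3 ("not active") is what the log's SSH error reflects.
is_active() {
  case "$1" in
    crio) state=active ;;
    docker|containerd) state=inactive ;;  # runtimes the test expects disabled
    *) state=unknown ;;
  esac
  echo "$state"
  [ "$state" = active ] || return 3
}

is_active docker
echo "status: $?"
```

The test therefore asserts on the combination: stdout must read `inactive` *and* the command must exit non-zero.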

x
+
TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-890712 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-890712 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-890712 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1410941: os: process already finished
helpers_test.go:508: unable to kill pid 1410743: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-890712 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-890712 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-890712 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [64272632-59a5-4a8d-a8f0-353f62b087f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [64272632-59a5-4a8d-a8f0-353f62b087f8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004790689s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-890712 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.23.0 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-890712 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-890712 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-890712 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-zdk7n" [a8086d51-17e2-4f66-8bad-7f251fdbf442] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-zdk7n" [a8086d51-17e2-4f66-8bad-7f251fdbf442] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.007364701s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "452.308798ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "53.46225ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 service list -o json
functional_test.go:1494: Took "569.569804ms" to run "out/minikube-linux-arm64 -p functional-890712 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "412.527394ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "76.464423ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30399
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/MountCmd/any-port (7.42s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdany-port3850316762/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723811886214658969" to /tmp/TestFunctionalparallelMountCmdany-port3850316762/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723811886214658969" to /tmp/TestFunctionalparallelMountCmdany-port3850316762/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723811886214658969" to /tmp/TestFunctionalparallelMountCmdany-port3850316762/001/test-1723811886214658969
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (490.7445ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 12:38 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 12:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 12:38 test-1723811886214658969
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh cat /mount-9p/test-1723811886214658969
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-890712 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9e69e36a-f4b9-4b3a-b0f3-dd70f45a9875] Pending
helpers_test.go:344: "busybox-mount" [9e69e36a-f4b9-4b3a-b0f3-dd70f45a9875] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9e69e36a-f4b9-4b3a-b0f3-dd70f45a9875] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9e69e36a-f4b9-4b3a-b0f3-dd70f45a9875] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004850182s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-890712 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdany-port3850316762/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.42s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30399
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdspecific-port3498912599/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (570.551014ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdspecific-port3498912599/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh "sudo umount -f /mount-9p": exit status 1 (397.564582ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-890712 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdspecific-port3498912599/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.57s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3682387623/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3682387623/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3682387623/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T" /mount1: exit status 1 (848.071493ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-890712 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3682387623/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3682387623/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-890712 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3682387623/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 version -o=json --components: (1.21119203s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-890712 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-890712
localhost/kicbase/echo-server:functional-890712
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-890712 image ls --format short --alsologtostderr:
I0816 12:38:26.982113 1415861 out.go:345] Setting OutFile to fd 1 ...
I0816 12:38:26.982358 1415861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:26.982383 1415861 out.go:358] Setting ErrFile to fd 2...
I0816 12:38:26.982404 1415861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:26.982688 1415861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
I0816 12:38:26.983406 1415861 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:26.983561 1415861 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:26.984108 1415861 cli_runner.go:164] Run: docker container inspect functional-890712 --format={{.State.Status}}
I0816 12:38:27.004424 1415861 ssh_runner.go:195] Run: systemctl --version
I0816 12:38:27.004495 1415861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-890712
I0816 12:38:27.035500 1415861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/functional-890712/id_rsa Username:docker}
I0816 12:38:27.131048 1415861 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-890712 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | a9dfdba8b7190 | 197MB  |
| localhost/kicbase/echo-server           | functional-890712  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 70594c812316a | 48.4MB |
| localhost/minikube-local-cache-test     | functional-890712  | 40df608e625c1 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-890712 image ls --format table --alsologtostderr:
I0816 12:38:27.525700 1416018 out.go:345] Setting OutFile to fd 1 ...
I0816 12:38:27.525990 1416018 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:27.526012 1416018 out.go:358] Setting ErrFile to fd 2...
I0816 12:38:27.526020 1416018 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:27.526287 1416018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
I0816 12:38:27.527079 1416018 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:27.527204 1416018 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:27.527697 1416018 cli_runner.go:164] Run: docker container inspect functional-890712 --format={{.State.Status}}
I0816 12:38:27.559853 1416018 ssh_runner.go:195] Run: systemctl --version
I0816 12:38:27.559922 1416018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-890712
I0816 12:38:27.584380 1416018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/functional-890712/id_rsa Username:docker}
I0816 12:38:27.682777 1416018 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-890712 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c0399
4e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},
{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},
{"id":"40df608e625c103a1907e24ca9df64ed31c81e3c1faafa30d51dae203261fad8","repoDigests":["localhost/minikube-local-cache-test@sha256:babaa1cbdcd3f5f3157998ea9395c9001aa30b6a6e08f784889c110f9e24c5c6"],"repoTags":["localhost/minikube-local-cache-test:functional-890712"],"size":"3330"},
{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},
{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"95949719"},
{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172049"},
{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},
{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"92567005"},
{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},
{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67007814"},
{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48397013"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-890712"],"size":"4788229"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-890712 image ls --format json --alsologtostderr:
I0816 12:38:27.272359 1415927 out.go:345] Setting OutFile to fd 1 ...
I0816 12:38:27.272590 1415927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:27.272634 1415927 out.go:358] Setting ErrFile to fd 2...
I0816 12:38:27.272656 1415927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:27.272944 1415927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
I0816 12:38:27.273640 1415927 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:27.273837 1415927 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:27.274405 1415927 cli_runner.go:164] Run: docker container inspect functional-890712 --format={{.State.Status}}
I0816 12:38:27.299049 1415927 ssh_runner.go:195] Run: systemctl --version
I0816 12:38:27.299110 1415927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-890712
I0816 12:38:27.318445 1415927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/functional-890712/id_rsa Username:docker}
I0816 12:38:27.418576 1415927 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
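The single-line JSON listing above can be consumed programmatically with nothing but the stdlib `json` module. A minimal sketch, using a trimmed two-entry sample copied from the output above (the `id` values are shortened here for readability):

```python
import json

# Trimmed two-entry sample of the `image ls --format json` output above.
raw = """[
  {"id": "27e3830e14027836", "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "139912446"},
  {"id": "20b332c9a70d8516", "repoTags": [], "size": "247562353"}
]"""

images = json.loads(raw)

# Untagged images appear with an empty repoTags list (compare the dashboard
# entry above), so index by tag only where one exists; sizes are strings.
tagged = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}
print(tagged)  # {'registry.k8s.io/etcd:3.5.15-0': 139912446}
```

The same structure is rendered as YAML by the `ImageListYaml` test below, so either format round-trips to the same fields.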
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-890712 image ls --format yaml --alsologtostderr:
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 40df608e625c103a1907e24ca9df64ed31c81e3c1faafa30d51dae203261fad8
repoDigests:
- localhost/minikube-local-cache-test@sha256:babaa1cbdcd3f5f3157998ea9395c9001aa30b6a6e08f784889c110f9e24c5c6
repoTags:
- localhost/minikube-local-cache-test:functional-890712
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-890712
size: "4788229"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55
repoTags:
- docker.io/library/nginx:latest
size: "197172049"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "48397013"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-890712 image ls --format yaml --alsologtostderr:
I0816 12:38:26.969403 1415862 out.go:345] Setting OutFile to fd 1 ...
I0816 12:38:26.969578 1415862 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:26.969591 1415862 out.go:358] Setting ErrFile to fd 2...
I0816 12:38:26.969598 1415862 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:26.969907 1415862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
I0816 12:38:26.970575 1415862 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:26.970696 1415862 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:26.971255 1415862 cli_runner.go:164] Run: docker container inspect functional-890712 --format={{.State.Status}}
I0816 12:38:26.998282 1415862 ssh_runner.go:195] Run: systemctl --version
I0816 12:38:26.998357 1415862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-890712
I0816 12:38:27.026810 1415862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/functional-890712/id_rsa Username:docker}
I0816 12:38:27.122554 1415862 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-890712 ssh pgrep buildkitd: exit status 1 (324.72651ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image build -t localhost/my-image:functional-890712 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 image build -t localhost/my-image:functional-890712 testdata/build --alsologtostderr: (2.174351382s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-890712 image build -t localhost/my-image:functional-890712 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3a6a5dded85
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-890712
--> 6dc94b58ea4
Successfully tagged localhost/my-image:functional-890712
6dc94b58ea4626581225dfd413f49561798f73d0f64b03bb28fd0a61bb385759
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-890712 image build -t localhost/my-image:functional-890712 testdata/build --alsologtostderr:
I0816 12:38:27.600309 1416024 out.go:345] Setting OutFile to fd 1 ...
I0816 12:38:27.601269 1416024 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:27.601282 1416024 out.go:358] Setting ErrFile to fd 2...
I0816 12:38:27.601288 1416024 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 12:38:27.601568 1416024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
I0816 12:38:27.602399 1416024 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:27.603041 1416024 config.go:182] Loaded profile config "functional-890712": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 12:38:27.603594 1416024 cli_runner.go:164] Run: docker container inspect functional-890712 --format={{.State.Status}}
I0816 12:38:27.629239 1416024 ssh_runner.go:195] Run: systemctl --version
I0816 12:38:27.629298 1416024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-890712
I0816 12:38:27.652541 1416024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/functional-890712/id_rsa Username:docker}
I0816 12:38:27.754217 1416024 build_images.go:161] Building image from path: /tmp/build.933965377.tar
I0816 12:38:27.754287 1416024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 12:38:27.763702 1416024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.933965377.tar
I0816 12:38:27.767731 1416024 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.933965377.tar: stat -c "%s %y" /var/lib/minikube/build/build.933965377.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.933965377.tar': No such file or directory
I0816 12:38:27.767770 1416024 ssh_runner.go:362] scp /tmp/build.933965377.tar --> /var/lib/minikube/build/build.933965377.tar (3072 bytes)
I0816 12:38:27.792955 1416024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.933965377
I0816 12:38:27.801627 1416024 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.933965377 -xf /var/lib/minikube/build/build.933965377.tar
I0816 12:38:27.810985 1416024 crio.go:315] Building image: /var/lib/minikube/build/build.933965377
I0816 12:38:27.811060 1416024 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-890712 /var/lib/minikube/build/build.933965377 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0816 12:38:29.667018 1416024 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-890712 /var/lib/minikube/build/build.933965377 --cgroup-manager=cgroupfs: (1.855926846s)
I0816 12:38:29.667086 1416024 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.933965377
I0816 12:38:29.676891 1416024 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.933965377.tar
I0816 12:38:29.685573 1416024 build_images.go:217] Built localhost/my-image:functional-890712 from /tmp/build.933965377.tar
I0816 12:38:29.685610 1416024 build_images.go:133] succeeded building to: functional-890712
I0816 12:38:29.685616 1416024 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.72s)
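The STEP lines in the build log above imply a three-line Dockerfile in testdata/build (`FROM gcr.io/k8s-minikube/busybox`, `RUN true`, `ADD content.txt /`); the exact file contents are an assumption. The surrounding log also shows how minikube stages the build: it tars the context (`/tmp/build.<N>.tar`), scps it into `/var/lib/minikube/build/`, extracts it, and runs `sudo podman build -t localhost/my-image:<profile> <dir> --cgroup-manager=cgroupfs`. A minimal in-memory sketch of the staging step (file names and contents hypothetical):

```python
import io
import tarfile

# Hypothetical reconstruction of the build context implied by the log above.
dockerfile = b"FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
content = b"hello\n"

# Stage the context as an uncompressed tar, as minikube's build_images.go does
# before copying it into the node.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in [("Dockerfile", dockerfile), ("content.txt", content)]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Re-read the archive to confirm what would be extracted on the node side.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    names = sorted(tar.getnames())
print(names)  # ['Dockerfile', 'content.txt']
```

On the node, podman then executes the three STEPs, producing the intermediate layer (`--> 3a6a5dded85`) and the final commit (`--> 6dc94b58ea4`) seen above.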
TestFunctional/parallel/ImageCommands/Setup (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-890712
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image load --daemon kicbase/echo-server:functional-890712 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-890712 image load --daemon kicbase/echo-server:functional-890712 --alsologtostderr: (1.634351698s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.98s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image load --daemon kicbase/echo-server:functional-890712 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-890712
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image load --daemon kicbase/echo-server:functional-890712 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image save kicbase/echo-server:functional-890712 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image rm kicbase/echo-server:functional-890712 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-890712
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-890712 image save --daemon kicbase/echo-server:functional-890712 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-890712
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-890712
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-890712
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-890712
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (175.49s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-137803 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0816 12:38:41.446731 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:41.454158 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:41.465803 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:41.487757 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:41.529332 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:41.610825 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:41.772241 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:42.093927 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:42.736232 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:44.017958 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:46.580511 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:38:51.701886 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:39:01.944225 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:39:22.426065 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:40:03.387715 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:41:25.309843 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-137803 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m54.677279336s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.49s)

TestMultiControlPlane/serial/DeployApp (6.76s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-137803 -- rollout status deployment/busybox: (4.029085652s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-7wdrc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-d656m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-dgpwg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-7wdrc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-d656m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-dgpwg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-7wdrc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-d656m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-dgpwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.76s)

TestMultiControlPlane/serial/PingHostFromPods (1.67s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-7wdrc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-7wdrc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-d656m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-d656m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-dgpwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-137803 -- exec busybox-7dff88458-dgpwg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
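The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline the test runs inside each busybox pod can be sketched against canned output. The sample text below mimics the busybox `nslookup` format (server lines, a blank line, then the answer), which is an assumption for illustration; the real test resolves against the cluster DNS.

```shell
#!/bin/sh
# Simulated busybox-style nslookup output (assumed format, for illustration only).
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Line 5 is the answer record; splitting on spaces, field 3 is the bare IP,
# which the test then pings from inside the pod.
host_ip="$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)"
echo "$host_ip"
```

With this sample output the pipeline yields `192.168.49.1`, the host gateway address the subsequent `ping -c 1` targets.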

TestMultiControlPlane/serial/AddWorkerNode (35.18s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-137803 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-137803 -v=7 --alsologtostderr: (34.177093838s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr: (1.000641053s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.18s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-137803 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

TestMultiControlPlane/serial/CopyFile (18.85s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp testdata/cp-test.txt ha-137803:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile390786287/001/cp-test_ha-137803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803:/home/docker/cp-test.txt ha-137803-m02:/home/docker/cp-test_ha-137803_ha-137803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test_ha-137803_ha-137803-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803:/home/docker/cp-test.txt ha-137803-m03:/home/docker/cp-test_ha-137803_ha-137803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test_ha-137803_ha-137803-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803:/home/docker/cp-test.txt ha-137803-m04:/home/docker/cp-test_ha-137803_ha-137803-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test_ha-137803_ha-137803-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp testdata/cp-test.txt ha-137803-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile390786287/001/cp-test_ha-137803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m02:/home/docker/cp-test.txt ha-137803:/home/docker/cp-test_ha-137803-m02_ha-137803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test_ha-137803-m02_ha-137803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m02:/home/docker/cp-test.txt ha-137803-m03:/home/docker/cp-test_ha-137803-m02_ha-137803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test_ha-137803-m02_ha-137803-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m02:/home/docker/cp-test.txt ha-137803-m04:/home/docker/cp-test_ha-137803-m02_ha-137803-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test_ha-137803-m02_ha-137803-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp testdata/cp-test.txt ha-137803-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile390786287/001/cp-test_ha-137803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m03:/home/docker/cp-test.txt ha-137803:/home/docker/cp-test_ha-137803-m03_ha-137803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test_ha-137803-m03_ha-137803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m03:/home/docker/cp-test.txt ha-137803-m02:/home/docker/cp-test_ha-137803-m03_ha-137803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test_ha-137803-m03_ha-137803-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m03:/home/docker/cp-test.txt ha-137803-m04:/home/docker/cp-test_ha-137803-m03_ha-137803-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test_ha-137803-m03_ha-137803-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp testdata/cp-test.txt ha-137803-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile390786287/001/cp-test_ha-137803-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m04:/home/docker/cp-test.txt ha-137803:/home/docker/cp-test_ha-137803-m04_ha-137803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803 "sudo cat /home/docker/cp-test_ha-137803-m04_ha-137803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m04:/home/docker/cp-test.txt ha-137803-m02:/home/docker/cp-test_ha-137803-m04_ha-137803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m02 "sudo cat /home/docker/cp-test_ha-137803-m04_ha-137803-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 cp ha-137803-m04:/home/docker/cp-test.txt ha-137803-m03:/home/docker/cp-test_ha-137803-m04_ha-137803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 ssh -n ha-137803-m03 "sudo cat /home/docker/cp-test_ha-137803-m04_ha-137803-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.85s)

TestMultiControlPlane/serial/StopSecondaryNode (12.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 node stop m02 -v=7 --alsologtostderr
E0816 12:42:36.730834 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:36.737318 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:36.748684 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:36.770097 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:36.811837 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:36.893222 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:37.054778 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:37.376465 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:38.018209 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:39.299654 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:41.861218 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 node stop m02 -v=7 --alsologtostderr: (12.010696798s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr: exit status 7 (789.310927ms)

-- stdout --
	ha-137803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-137803-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-137803-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-137803-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 12:42:43.760628 1431786 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:42:43.760923 1431786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:42:43.760953 1431786 out.go:358] Setting ErrFile to fd 2...
	I0816 12:42:43.760974 1431786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:42:43.761255 1431786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:42:43.761484 1431786 out.go:352] Setting JSON to false
	I0816 12:42:43.761548 1431786 mustload.go:65] Loading cluster: ha-137803
	I0816 12:42:43.761592 1431786 notify.go:220] Checking for updates...
	I0816 12:42:43.762056 1431786 config.go:182] Loaded profile config "ha-137803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:42:43.762093 1431786 status.go:255] checking status of ha-137803 ...
	I0816 12:42:43.762956 1431786 cli_runner.go:164] Run: docker container inspect ha-137803 --format={{.State.Status}}
	I0816 12:42:43.782191 1431786 status.go:330] ha-137803 host status = "Running" (err=<nil>)
	I0816 12:42:43.782215 1431786 host.go:66] Checking if "ha-137803" exists ...
	I0816 12:42:43.782530 1431786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-137803
	I0816 12:42:43.811252 1431786 host.go:66] Checking if "ha-137803" exists ...
	I0816 12:42:43.811653 1431786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:42:43.811709 1431786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-137803
	I0816 12:42:43.835919 1431786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/ha-137803/id_rsa Username:docker}
	I0816 12:42:43.931179 1431786 ssh_runner.go:195] Run: systemctl --version
	I0816 12:42:43.936241 1431786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:42:43.951844 1431786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:42:44.019298 1431786 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-16 12:42:44.008024805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:42:44.020056 1431786 kubeconfig.go:125] found "ha-137803" server: "https://192.168.49.254:8443"
	I0816 12:42:44.020115 1431786 api_server.go:166] Checking apiserver status ...
	I0816 12:42:44.020164 1431786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:42:44.033104 1431786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup
	I0816 12:42:44.043861 1431786 api_server.go:182] apiserver freezer: "4:freezer:/docker/b7259f33e7b2200e530a6e0ced59e93c2666a9af96934e1fd38b6df4135a8c0d/crio/crio-db069dab43e9b156ff1343d48ab0b4029f443765d515554eeee0d612e55aa9d2"
	I0816 12:42:44.043944 1431786 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b7259f33e7b2200e530a6e0ced59e93c2666a9af96934e1fd38b6df4135a8c0d/crio/crio-db069dab43e9b156ff1343d48ab0b4029f443765d515554eeee0d612e55aa9d2/freezer.state
	I0816 12:42:44.053209 1431786 api_server.go:204] freezer state: "THAWED"
	I0816 12:42:44.053239 1431786 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0816 12:42:44.061167 1431786 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0816 12:42:44.061196 1431786 status.go:422] ha-137803 apiserver status = Running (err=<nil>)
	I0816 12:42:44.061209 1431786 status.go:257] ha-137803 status: &{Name:ha-137803 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:42:44.061227 1431786 status.go:255] checking status of ha-137803-m02 ...
	I0816 12:42:44.061547 1431786 cli_runner.go:164] Run: docker container inspect ha-137803-m02 --format={{.State.Status}}
	I0816 12:42:44.078563 1431786 status.go:330] ha-137803-m02 host status = "Stopped" (err=<nil>)
	I0816 12:42:44.078588 1431786 status.go:343] host is not running, skipping remaining checks
	I0816 12:42:44.078596 1431786 status.go:257] ha-137803-m02 status: &{Name:ha-137803-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:42:44.078614 1431786 status.go:255] checking status of ha-137803-m03 ...
	I0816 12:42:44.078921 1431786 cli_runner.go:164] Run: docker container inspect ha-137803-m03 --format={{.State.Status}}
	I0816 12:42:44.095230 1431786 status.go:330] ha-137803-m03 host status = "Running" (err=<nil>)
	I0816 12:42:44.095259 1431786 host.go:66] Checking if "ha-137803-m03" exists ...
	I0816 12:42:44.095597 1431786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-137803-m03
	I0816 12:42:44.114587 1431786 host.go:66] Checking if "ha-137803-m03" exists ...
	I0816 12:42:44.115279 1431786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:42:44.115376 1431786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-137803-m03
	I0816 12:42:44.150348 1431786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34620 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/ha-137803-m03/id_rsa Username:docker}
	I0816 12:42:44.251828 1431786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:42:44.264294 1431786 kubeconfig.go:125] found "ha-137803" server: "https://192.168.49.254:8443"
	I0816 12:42:44.264322 1431786 api_server.go:166] Checking apiserver status ...
	I0816 12:42:44.264370 1431786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:42:44.275812 1431786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	I0816 12:42:44.285384 1431786 api_server.go:182] apiserver freezer: "4:freezer:/docker/3b9cbd1d846c896a03f558124e888b906e86ee8223436419b605d1492253a8c6/crio/crio-978ea0e5a70b5bf9c8db50eeae3434e7e8c9ccd1a6874b1aae9407f9cd97577d"
	I0816 12:42:44.285459 1431786 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3b9cbd1d846c896a03f558124e888b906e86ee8223436419b605d1492253a8c6/crio/crio-978ea0e5a70b5bf9c8db50eeae3434e7e8c9ccd1a6874b1aae9407f9cd97577d/freezer.state
	I0816 12:42:44.294395 1431786 api_server.go:204] freezer state: "THAWED"
	I0816 12:42:44.294422 1431786 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0816 12:42:44.302115 1431786 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0816 12:42:44.302146 1431786 status.go:422] ha-137803-m03 apiserver status = Running (err=<nil>)
	I0816 12:42:44.302157 1431786 status.go:257] ha-137803-m03 status: &{Name:ha-137803-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:42:44.302177 1431786 status.go:255] checking status of ha-137803-m04 ...
	I0816 12:42:44.302484 1431786 cli_runner.go:164] Run: docker container inspect ha-137803-m04 --format={{.State.Status}}
	I0816 12:42:44.318780 1431786 status.go:330] ha-137803-m04 host status = "Running" (err=<nil>)
	I0816 12:42:44.318807 1431786 host.go:66] Checking if "ha-137803-m04" exists ...
	I0816 12:42:44.319166 1431786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-137803-m04
	I0816 12:42:44.357327 1431786 host.go:66] Checking if "ha-137803-m04" exists ...
	I0816 12:42:44.357909 1431786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:42:44.357967 1431786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-137803-m04
	I0816 12:42:44.383941 1431786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34625 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/ha-137803-m04/id_rsa Username:docker}
	I0816 12:42:44.478969 1431786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:42:44.492845 1431786 status.go:257] ha-137803-m04 status: &{Name:ha-137803-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.80s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (26.27s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 node start m02 -v=7 --alsologtostderr
E0816 12:42:46.983192 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:42:57.225442 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 node start m02 -v=7 --alsologtostderr: (24.683890434s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr: (1.423889974s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.18s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.181092979s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.18s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.45s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-137803 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-137803 -v=7 --alsologtostderr
E0816 12:43:17.706761 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:43:41.446244 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-137803 -v=7 --alsologtostderr: (36.854355927s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-137803 --wait=true -v=7 --alsologtostderr
E0816 12:43:58.668440 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:44:09.151380 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:45:20.590264 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-137803 --wait=true -v=7 --alsologtostderr: (2m35.44332443s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-137803
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.45s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.57s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 node delete m03 -v=7 --alsologtostderr: (11.638802598s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.57s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

TestMultiControlPlane/serial/StopCluster (35.72s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 stop -v=7 --alsologtostderr: (35.604422439s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr: exit status 7 (110.658745ms)

-- stdout --
	ha-137803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-137803-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-137803-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 12:47:17.819686 1446131 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:47:17.819879 1446131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:47:17.819910 1446131 out.go:358] Setting ErrFile to fd 2...
	I0816 12:47:17.819932 1446131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:47:17.820176 1446131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:47:17.820395 1446131 out.go:352] Setting JSON to false
	I0816 12:47:17.820466 1446131 mustload.go:65] Loading cluster: ha-137803
	I0816 12:47:17.820529 1446131 notify.go:220] Checking for updates...
	I0816 12:47:17.820929 1446131 config.go:182] Loaded profile config "ha-137803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:47:17.820962 1446131 status.go:255] checking status of ha-137803 ...
	I0816 12:47:17.821485 1446131 cli_runner.go:164] Run: docker container inspect ha-137803 --format={{.State.Status}}
	I0816 12:47:17.840056 1446131 status.go:330] ha-137803 host status = "Stopped" (err=<nil>)
	I0816 12:47:17.840079 1446131 status.go:343] host is not running, skipping remaining checks
	I0816 12:47:17.840087 1446131 status.go:257] ha-137803 status: &{Name:ha-137803 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:47:17.840111 1446131 status.go:255] checking status of ha-137803-m02 ...
	I0816 12:47:17.840447 1446131 cli_runner.go:164] Run: docker container inspect ha-137803-m02 --format={{.State.Status}}
	I0816 12:47:17.863278 1446131 status.go:330] ha-137803-m02 host status = "Stopped" (err=<nil>)
	I0816 12:47:17.863298 1446131 status.go:343] host is not running, skipping remaining checks
	I0816 12:47:17.863306 1446131 status.go:257] ha-137803-m02 status: &{Name:ha-137803-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:47:17.863328 1446131 status.go:255] checking status of ha-137803-m04 ...
	I0816 12:47:17.863688 1446131 cli_runner.go:164] Run: docker container inspect ha-137803-m04 --format={{.State.Status}}
	I0816 12:47:17.882981 1446131 status.go:330] ha-137803-m04 host status = "Stopped" (err=<nil>)
	I0816 12:47:17.883007 1446131 status.go:343] host is not running, skipping remaining checks
	I0816 12:47:17.883015 1446131 status.go:257] ha-137803-m04 status: &{Name:ha-137803-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.72s)

TestMultiControlPlane/serial/RestartCluster (98.78s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-137803 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0816 12:47:36.729458 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:48:04.432414 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 12:48:41.446361 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-137803 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m37.826488444s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.78s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

TestMultiControlPlane/serial/AddSecondaryNode (76.25s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-137803 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-137803 --control-plane -v=7 --alsologtostderr: (1m15.211587612s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-137803 status -v=7 --alsologtostderr: (1.038114944s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestJSONOutput/start/Command (52.45s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-018175 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-018175 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (52.450362591s)
--- PASS: TestJSONOutput/start/Command (52.45s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.82s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-018175 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-018175 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.89s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-018175 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-018175 --output=json --user=testUser: (5.886580645s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-637286 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-637286 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.500638ms)

-- stdout --
	{"specversion":"1.0","id":"65d8c388-4952-4e8c-8c61-5e9628129b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-637286] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"96b1963f-53e0-4e57-ad95-c03627f8d202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"15698529-262f-4305-b5bb-e80b32968463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8aca84e4-4620-4afb-be82-3bf4c3083d97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig"}}
	{"specversion":"1.0","id":"4de087f8-e8ca-47dc-8d96-2e3e70717474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube"}}
	{"specversion":"1.0","id":"832bcf34-8b0e-4fe2-9885-5bc746e0b706","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4e316151-cd5c-45b0-9e35-7a9dea5be29a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0dc49c54-d45f-4d63-93d0-9eb0267bf898","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-637286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-637286
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (39.91s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-330413 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-330413 --network=: (37.802347379s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-330413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-330413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-330413: (2.083492614s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.91s)

TestKicCustomNetwork/use_default_bridge_network (35.61s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-075743 --network=bridge
E0816 12:52:36.730228 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-075743 --network=bridge: (33.60279689s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-075743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-075743
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-075743: (1.983038562s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.61s)

TestKicExistingNetwork (34.21s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-402983 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-402983 --network=existing-network: (32.177472858s)
helpers_test.go:175: Cleaning up "existing-network-402983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-402983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-402983: (1.8899798s)
--- PASS: TestKicExistingNetwork (34.21s)

TestKicCustomSubnet (33.67s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-940844 --subnet=192.168.60.0/24
E0816 12:53:41.445887 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-940844 --subnet=192.168.60.0/24: (31.618692479s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-940844 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-940844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-940844
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-940844: (2.028081065s)
--- PASS: TestKicCustomSubnet (33.67s)

TestKicStaticIP (33.78s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-532163 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-532163 --static-ip=192.168.200.200: (31.558616816s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-532163 ip
helpers_test.go:175: Cleaning up "static-ip-532163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-532163
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-532163: (2.075391753s)
--- PASS: TestKicStaticIP (33.78s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (72.22s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-823150 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-823150 --driver=docker  --container-runtime=crio: (31.049720402s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-825890 --driver=docker  --container-runtime=crio
E0816 12:55:04.513879 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-825890 --driver=docker  --container-runtime=crio: (35.417333096s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-823150
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-825890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-825890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-825890
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-825890: (1.989853254s)
helpers_test.go:175: Cleaning up "first-823150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-823150
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-823150: (2.322058991s)
--- PASS: TestMinikubeProfile (72.22s)

TestMountStart/serial/StartWithMountFirst (6.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-248650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-248650 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.971120792s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.97s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-248650 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.17s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-261565 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-261565 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.169113922s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.17s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-261565 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-248650 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-248650 --alsologtostderr -v=5: (1.638786256s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-261565 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-261565
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-261565: (1.198901782s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.45s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-261565
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-261565: (7.450767349s)
--- PASS: TestMountStart/serial/RestartStopped (8.45s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-261565 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (79.15s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110791 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110791 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.634078559s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.15s)

TestMultiNode/serial/DeployApp2Nodes (5.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-110791 -- rollout status deployment/busybox: (3.892232687s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-lk2fs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-zd6sd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-lk2fs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-zd6sd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-lk2fs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-zd6sd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.75s)

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-lk2fs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-lk2fs -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-zd6sd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110791 -- exec busybox-7dff88458-zd6sd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (29.02s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-110791 -v 3 --alsologtostderr
E0816 12:57:36.730156 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-110791 -v 3 --alsologtostderr: (28.368444614s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.02s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-110791 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp testdata/cp-test.txt multinode-110791:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1657702186/001/cp-test_multinode-110791.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791:/home/docker/cp-test.txt multinode-110791-m02:/home/docker/cp-test_multinode-110791_multinode-110791-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m02 "sudo cat /home/docker/cp-test_multinode-110791_multinode-110791-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791:/home/docker/cp-test.txt multinode-110791-m03:/home/docker/cp-test_multinode-110791_multinode-110791-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m03 "sudo cat /home/docker/cp-test_multinode-110791_multinode-110791-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp testdata/cp-test.txt multinode-110791-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1657702186/001/cp-test_multinode-110791-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791-m02:/home/docker/cp-test.txt multinode-110791:/home/docker/cp-test_multinode-110791-m02_multinode-110791.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791 "sudo cat /home/docker/cp-test_multinode-110791-m02_multinode-110791.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791-m02:/home/docker/cp-test.txt multinode-110791-m03:/home/docker/cp-test_multinode-110791-m02_multinode-110791-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m03 "sudo cat /home/docker/cp-test_multinode-110791-m02_multinode-110791-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp testdata/cp-test.txt multinode-110791-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1657702186/001/cp-test_multinode-110791-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791-m03:/home/docker/cp-test.txt multinode-110791:/home/docker/cp-test_multinode-110791-m03_multinode-110791.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791 "sudo cat /home/docker/cp-test_multinode-110791-m03_multinode-110791.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 cp multinode-110791-m03:/home/docker/cp-test.txt multinode-110791-m02:/home/docker/cp-test_multinode-110791-m03_multinode-110791-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 ssh -n multinode-110791-m02 "sudo cat /home/docker/cp-test_multinode-110791-m03_multinode-110791-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-110791 node stop m03: (1.211298629s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110791 status: exit status 7 (512.641245ms)

-- stdout --
	multinode-110791
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-110791-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-110791-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr: exit status 7 (501.566414ms)

-- stdout --
	multinode-110791
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-110791-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-110791-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0816 12:58:11.211001 1499422 out.go:345] Setting OutFile to fd 1 ...
	I0816 12:58:11.211200 1499422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:58:11.211209 1499422 out.go:358] Setting ErrFile to fd 2...
	I0816 12:58:11.211214 1499422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 12:58:11.211441 1499422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 12:58:11.211635 1499422 out.go:352] Setting JSON to false
	I0816 12:58:11.211677 1499422 mustload.go:65] Loading cluster: multinode-110791
	I0816 12:58:11.211767 1499422 notify.go:220] Checking for updates...
	I0816 12:58:11.212104 1499422 config.go:182] Loaded profile config "multinode-110791": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 12:58:11.212124 1499422 status.go:255] checking status of multinode-110791 ...
	I0816 12:58:11.212948 1499422 cli_runner.go:164] Run: docker container inspect multinode-110791 --format={{.State.Status}}
	I0816 12:58:11.232432 1499422 status.go:330] multinode-110791 host status = "Running" (err=<nil>)
	I0816 12:58:11.232461 1499422 host.go:66] Checking if "multinode-110791" exists ...
	I0816 12:58:11.232781 1499422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-110791
	I0816 12:58:11.259975 1499422 host.go:66] Checking if "multinode-110791" exists ...
	I0816 12:58:11.260281 1499422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:58:11.260326 1499422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-110791
	I0816 12:58:11.279375 1499422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34730 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/multinode-110791/id_rsa Username:docker}
	I0816 12:58:11.370839 1499422 ssh_runner.go:195] Run: systemctl --version
	I0816 12:58:11.378128 1499422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:58:11.390859 1499422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 12:58:11.447827 1499422 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-16 12:58:11.437889776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 12:58:11.448410 1499422 kubeconfig.go:125] found "multinode-110791" server: "https://192.168.67.2:8443"
	I0816 12:58:11.448445 1499422 api_server.go:166] Checking apiserver status ...
	I0816 12:58:11.448496 1499422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 12:58:11.459771 1499422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	I0816 12:58:11.469612 1499422 api_server.go:182] apiserver freezer: "4:freezer:/docker/3057f2d3e8c4164e4fd8de22c1887454dd185b4a412e781398e3b1d83f73842c/crio/crio-d89869a59ae6ab6a8d8ace3de786e33a97c0031c7b63fd4e179dc950dc2f5d52"
	I0816 12:58:11.469686 1499422 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3057f2d3e8c4164e4fd8de22c1887454dd185b4a412e781398e3b1d83f73842c/crio/crio-d89869a59ae6ab6a8d8ace3de786e33a97c0031c7b63fd4e179dc950dc2f5d52/freezer.state
	I0816 12:58:11.478372 1499422 api_server.go:204] freezer state: "THAWED"
	I0816 12:58:11.478401 1499422 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 12:58:11.486134 1499422 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 12:58:11.486163 1499422 status.go:422] multinode-110791 apiserver status = Running (err=<nil>)
	I0816 12:58:11.486175 1499422 status.go:257] multinode-110791 status: &{Name:multinode-110791 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:58:11.486193 1499422 status.go:255] checking status of multinode-110791-m02 ...
	I0816 12:58:11.486512 1499422 cli_runner.go:164] Run: docker container inspect multinode-110791-m02 --format={{.State.Status}}
	I0816 12:58:11.503905 1499422 status.go:330] multinode-110791-m02 host status = "Running" (err=<nil>)
	I0816 12:58:11.503931 1499422 host.go:66] Checking if "multinode-110791-m02" exists ...
	I0816 12:58:11.504280 1499422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-110791-m02
	I0816 12:58:11.520487 1499422 host.go:66] Checking if "multinode-110791-m02" exists ...
	I0816 12:58:11.520817 1499422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 12:58:11.520872 1499422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-110791-m02
	I0816 12:58:11.537059 1499422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/19423-1381335/.minikube/machines/multinode-110791-m02/id_rsa Username:docker}
	I0816 12:58:11.630675 1499422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 12:58:11.642112 1499422 status.go:257] multinode-110791-m02 status: &{Name:multinode-110791-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 12:58:11.642149 1499422 status.go:255] checking status of multinode-110791-m03 ...
	I0816 12:58:11.642464 1499422 cli_runner.go:164] Run: docker container inspect multinode-110791-m03 --format={{.State.Status}}
	I0816 12:58:11.657929 1499422 status.go:330] multinode-110791-m03 host status = "Stopped" (err=<nil>)
	I0816 12:58:11.657953 1499422 status.go:343] host is not running, skipping remaining checks
	I0816 12:58:11.657959 1499422 status.go:257] multinode-110791-m03 status: &{Name:multinode-110791-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

TestMultiNode/serial/StartAfterStop (10.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-110791 node start m03 -v=7 --alsologtostderr: (9.58912779s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.35s)

TestMultiNode/serial/RestartKeepsNodes (81.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-110791
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-110791
E0816 12:58:41.445959 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-110791: (24.872500339s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110791 --wait=true -v=8 --alsologtostderr
E0816 12:58:59.794183 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110791 --wait=true -v=8 --alsologtostderr: (56.025779818s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-110791
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.04s)

TestMultiNode/serial/DeleteNode (5.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-110791 node delete m03: (4.640172117s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)

TestMultiNode/serial/StopMultiNode (25.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-110791 stop: (24.889738731s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110791 status: exit status 7 (89.230785ms)

-- stdout --
	multinode-110791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-110791-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr: exit status 7 (88.581297ms)

-- stdout --
	multinode-110791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-110791-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 13:00:13.407904 1506874 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:00:13.408039 1506874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:00:13.408050 1506874 out.go:358] Setting ErrFile to fd 2...
	I0816 13:00:13.408056 1506874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:00:13.408277 1506874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 13:00:13.408462 1506874 out.go:352] Setting JSON to false
	I0816 13:00:13.408503 1506874 mustload.go:65] Loading cluster: multinode-110791
	I0816 13:00:13.408612 1506874 notify.go:220] Checking for updates...
	I0816 13:00:13.408922 1506874 config.go:182] Loaded profile config "multinode-110791": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 13:00:13.408944 1506874 status.go:255] checking status of multinode-110791 ...
	I0816 13:00:13.409418 1506874 cli_runner.go:164] Run: docker container inspect multinode-110791 --format={{.State.Status}}
	I0816 13:00:13.428275 1506874 status.go:330] multinode-110791 host status = "Stopped" (err=<nil>)
	I0816 13:00:13.428300 1506874 status.go:343] host is not running, skipping remaining checks
	I0816 13:00:13.428307 1506874 status.go:257] multinode-110791 status: &{Name:multinode-110791 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 13:00:13.428331 1506874 status.go:255] checking status of multinode-110791-m02 ...
	I0816 13:00:13.428633 1506874 cli_runner.go:164] Run: docker container inspect multinode-110791-m02 --format={{.State.Status}}
	I0816 13:00:13.452357 1506874 status.go:330] multinode-110791-m02 host status = "Stopped" (err=<nil>)
	I0816 13:00:13.452384 1506874 status.go:343] host is not running, skipping remaining checks
	I0816 13:00:13.452391 1506874 status.go:257] multinode-110791-m02 status: &{Name:multinode-110791-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.07s)

TestMultiNode/serial/RestartMultiNode (50.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110791 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110791 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (50.030259289s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110791 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.71s)

TestMultiNode/serial/ValidateNameConflict (31.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-110791
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110791-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-110791-m02 --driver=docker  --container-runtime=crio: exit status 14 (85.056378ms)

-- stdout --
	* [multinode-110791-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-110791-m02' is duplicated with machine name 'multinode-110791-m02' in profile 'multinode-110791'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110791-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110791-m03 --driver=docker  --container-runtime=crio: (28.837730567s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-110791
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-110791: exit status 80 (324.714813ms)

-- stdout --
	* Adding node m03 to cluster multinode-110791 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-110791-m03 already exists in multinode-110791-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-110791-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-110791-m03: (1.96858875s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.27s)

TestPreload (123.68s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-873949 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0816 13:02:36.729654 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-873949 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.562771395s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-873949 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-873949 image pull gcr.io/k8s-minikube/busybox: (1.90021799s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-873949
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-873949: (5.789691056s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-873949 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-873949 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.822607833s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-873949 image list
helpers_test.go:175: Cleaning up "test-preload-873949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-873949
E0816 13:03:41.445953 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-873949: (2.334112583s)
--- PASS: TestPreload (123.68s)

TestScheduledStopUnix (108.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-872153 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-872153 --memory=2048 --driver=docker  --container-runtime=crio: (31.629481836s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-872153 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-872153 -n scheduled-stop-872153
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-872153 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-872153 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-872153 -n scheduled-stop-872153
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-872153
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-872153 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-872153
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-872153: exit status 7 (74.992815ms)

-- stdout --
	scheduled-stop-872153
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-872153 -n scheduled-stop-872153
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-872153 -n scheduled-stop-872153: exit status 7 (67.964752ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-872153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-872153
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-872153: (4.966822157s)
--- PASS: TestScheduledStopUnix (108.17s)

TestInsufficientStorage (10.21s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-177363 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-177363 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.725854256s)

-- stdout --
	{"specversion":"1.0","id":"01153edf-fcf5-4968-89d5-f17a0bbc8c28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-177363] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2f4406e-274a-4c16-84f2-b7da68c3a522","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"9013a735-6e8a-4212-a233-99cb63ea656c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f42c7a3-068c-4692-981a-39c34a4cb5ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig"}}
	{"specversion":"1.0","id":"caabde72-73ca-49c3-b5f4-1efee54b4074","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube"}}
	{"specversion":"1.0","id":"5aa29492-ef15-4180-9e2d-40d8533743fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2a581b1c-16b5-47e4-9c36-091e73295c3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02a03246-5f54-4ddd-9d2c-e2aa8fff177a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eaef5303-c5b7-4585-9722-b9f784656b6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b031103d-3053-4b1d-9dfa-a8375eb3f18f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"64a2869d-7120-449f-bbc3-40a34faa749c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d26b983a-993b-4354-adcc-949389d469bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-177363\" primary control-plane node in \"insufficient-storage-177363\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88a4f3ed-1e1f-4ebd-9f59-a3bd54dadadd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723650208-19443 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a1104c5-0e1f-47e9-b00d-3a196db024ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"de839bf7-082b-4511-bfde-fc228cb756f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-177363 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-177363 --output=json --layout=cluster: exit status 7 (284.283573ms)

-- stdout --
	{"Name":"insufficient-storage-177363","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-177363","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 13:05:39.423148 1524607 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-177363" does not appear in /home/jenkins/minikube-integration/19423-1381335/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-177363 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-177363 --output=json --layout=cluster: exit status 7 (279.620357ms)

-- stdout --
	{"Name":"insufficient-storage-177363","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-177363","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 13:05:39.703461 1524670 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-177363" does not appear in /home/jenkins/minikube-integration/19423-1381335/kubeconfig
	E0816 13:05:39.713872 1524670 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/insufficient-storage-177363/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-177363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-177363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-177363: (1.923395353s)
--- PASS: TestInsufficientStorage (10.21s)

TestRunningBinaryUpgrade (75.88s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1858136428 start -p running-upgrade-204415 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0816 13:13:41.445993 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1858136428 start -p running-upgrade-204415 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.549332664s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-204415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-204415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.981858867s)
helpers_test.go:175: Cleaning up "running-upgrade-204415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-204415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-204415: (2.475198495s)
--- PASS: TestRunningBinaryUpgrade (75.88s)

TestKubernetesUpgrade (390.77s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0816 13:12:36.733339 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.520660026s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-759910
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-759910: (1.28694921s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-759910 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-759910 status --format={{.Host}}: exit status 7 (64.529201ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.742734613s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-759910 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (113.676731ms)

-- stdout --
	* [kubernetes-upgrade-759910] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-759910
	    minikube start -p kubernetes-upgrade-759910 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7599102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-759910 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-759910 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.915434982s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-759910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-759910: (2.97385877s)
--- PASS: TestKubernetesUpgrade (390.77s)

TestMissingContainerUpgrade (148.98s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3905709422 start -p missing-upgrade-242247 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3905709422 start -p missing-upgrade-242247 --memory=2200 --driver=docker  --container-runtime=crio: (1m12.117768883s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-242247
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-242247: (10.445402817s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-242247
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-242247 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-242247 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.475515441s)
helpers_test.go:175: Cleaning up "missing-upgrade-242247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-242247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-242247: (2.059261913s)
--- PASS: TestMissingContainerUpgrade (148.98s)

TestPause/serial/Start (56.7s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-861212 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-861212 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.695617285s)
--- PASS: TestPause/serial/Start (56.70s)

TestPause/serial/SecondStartNoReconfiguration (22.94s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-861212 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-861212 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.923618948s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (22.94s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-861212 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-861212 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-861212 --output=json --layout=cluster: exit status 2 (331.274548ms)

-- stdout --
	{"Name":"pause-861212","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-861212","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
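Note that `--layout=cluster` reports state as HTTP-style status codes (418 "Paused", 405 "Stopped", 200 "OK") and a non-zero process exit, so consumers should parse the JSON rather than rely on the exit code alone. A minimal sketch of reading such a payload — the JSON literal is abridged from the captured stdout above, and `component_states` is a hypothetical helper, not part of minikube:

```python
import json

# Abridged status payload captured by TestPause/serial/VerifyStatus above.
payload = json.loads(
    '{"Name":"pause-861212","StatusCode":418,"StatusName":"Paused",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
    '"Nodes":[{"Name":"pause-861212","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

def component_states(status: dict) -> dict:
    """Flatten per-node component StatusNames into {component: state}."""
    states = {}
    for node in status.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            states[name] = comp["StatusName"]
    return states

print(payload["StatusName"])       # overall cluster state: Paused
print(component_states(payload))   # per-component states
```

This is what the test asserts in spirit: a paused cluster is apiserver "Paused" plus kubelet "Stopped", even though the node itself reports 200 "OK".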

                                                
                                    
TestPause/serial/Unpause (0.65s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-861212 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-861212 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (2.63s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-861212 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-861212 --alsologtostderr -v=5: (2.629771019s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

TestPause/serial/VerifyDeletedResources (0.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-861212
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-861212: exit status 1 (15.93787ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-861212: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)
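The deletion check above relies on `docker volume inspect` failing (exit status 1) while printing an empty `[]` list for a removed volume. The same decision can be sketched as a pure function — `volume_deleted` is a hypothetical helper for illustration, not part of the test suite:

```python
import json

def volume_deleted(exit_code: int, stdout: str) -> bool:
    """A volume counts as deleted when inspect fails AND returns an empty list."""
    return exit_code != 0 and json.loads(stdout or "[]") == []

# Values captured in the test output above: exit status 1, stdout "[]".
print(volume_deleted(1, "[]"))                            # deleted volume -> True
print(volume_deleted(0, '[{"Name": "pause-861212"}]'))    # still present -> False
```

Requiring both signals guards against treating an unrelated daemon error as a successful cleanup.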

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-246634 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-246634 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (81.313347ms)

-- stdout --
	* [NoKubernetes-246634] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
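The test passes because minikube rejects the conflicting flags up front with exit status 14 (MK_USAGE) before touching the driver. The general pattern — declaring `--no-kubernetes` and `--kubernetes-version` mutually exclusive and mapping the parse failure to a usage exit code — can be sketched generically; this is an illustrative re-implementation in Python, not minikube's actual Go flag handling:

```python
import argparse

# Declare the two flags mutually exclusive, mirroring the MK_USAGE rule above.
parser = argparse.ArgumentParser(prog="start")
group = parser.add_mutually_exclusive_group()
group.add_argument("--no-kubernetes", action="store_true")
group.add_argument("--kubernetes-version")

def validate(argv):
    """Return (namespace, exit_code); 14 stands in for minikube's MK_USAGE."""
    try:
        return parser.parse_args(argv), 0
    except SystemExit:
        # argparse raises SystemExit(2) on usage errors; minikube exits 14.
        return None, 14

_, code = validate(["--no-kubernetes", "--kubernetes-version=1.20"])
print(code)  # conflicting flags -> 14
_, code = validate(["--no-kubernetes"])
print(code)  # either flag alone is fine -> 0
```

Failing fast on flag conflicts is what keeps this subtest at 0.08s: no cluster state is created or cleaned up.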

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.4s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-246634 --driver=docker  --container-runtime=crio
E0816 13:07:36.729419 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-246634 --driver=docker  --container-runtime=crio: (32.988846392s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-246634 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.40s)

TestNoKubernetes/serial/StartWithStopK8s (12.83s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-246634 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-246634 --no-kubernetes --driver=docker  --container-runtime=crio: (10.240447469s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-246634 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-246634 status -o json: exit status 2 (381.063398ms)

-- stdout --
	{"Name":"NoKubernetes-246634","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-246634
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-246634: (2.203658222s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.83s)

TestNoKubernetes/serial/Start (10.34s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-246634 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-246634 --no-kubernetes --driver=docker  --container-runtime=crio: (10.338431037s)
--- PASS: TestNoKubernetes/serial/Start (10.34s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-246634 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-246634 "sudo systemctl is-active --quiet service kubelet": exit status 1 (493.037695ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)
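The `ssh: Process exited with status 3` line is the signal here: `systemctl is-active` exits 0 only when the unit is active, and the observed status 3 matches the LSB "program is not running" convention, which is exactly what the test wants for a `--no-kubernetes` node. A hedged sketch of interpreting such exit codes — the mapping below follows the LSB init-script status codes and should be treated as an assumption for codes other than 0:

```python
import subprocess

# LSB status-code convention used by service status checks
# (0 is documented for systemctl is-active; 1-4 are the LSB meanings).
LSB_STATUS = {
    0: "active",
    1: "dead (pid file exists)",
    2: "dead (lock file exists)",
    3: "not running",
    4: "unknown",
}

def unit_state(exit_code: int) -> str:
    return LSB_STATUS.get(exit_code, "unknown")

# Simulate the remote check; the real command in the test is
#   ssh ... "sudo systemctl is-active --quiet service kubelet"
proc = subprocess.run(["sh", "-c", "exit 3"])
print(unit_state(proc.returncode))  # not running
```

Because `--quiet` suppresses output, the exit code is the only channel the test has, which is why the assertion is "non-zero exit expected" rather than a string match.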

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (1.813568108s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.42s)

TestNoKubernetes/serial/Stop (1.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-246634
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-246634: (1.333149354s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (7.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-246634 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-246634 --driver=docker  --container-runtime=crio: (7.464238285s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.46s)

TestNetworkPlugins/group/false (4.7s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-727899 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-727899 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (217.035262ms)

-- stdout --
	* [false-727899] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0816 13:08:14.591558 1541872 out.go:345] Setting OutFile to fd 1 ...
	I0816 13:08:14.591790 1541872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:08:14.591815 1541872 out.go:358] Setting ErrFile to fd 2...
	I0816 13:08:14.591834 1541872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 13:08:14.592099 1541872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1381335/.minikube/bin
	I0816 13:08:14.592540 1541872 out.go:352] Setting JSON to false
	I0816 13:08:14.593476 1541872 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39038,"bootTime":1723774657,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0816 13:08:14.593586 1541872 start.go:139] virtualization:  
	I0816 13:08:14.596585 1541872 out.go:177] * [false-727899] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 13:08:14.599539 1541872 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 13:08:14.599602 1541872 notify.go:220] Checking for updates...
	I0816 13:08:14.604165 1541872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 13:08:14.605977 1541872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1381335/kubeconfig
	I0816 13:08:14.607742 1541872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1381335/.minikube
	I0816 13:08:14.609408 1541872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 13:08:14.611919 1541872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 13:08:14.615231 1541872 config.go:182] Loaded profile config "NoKubernetes-246634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0816 13:08:14.615406 1541872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 13:08:14.654877 1541872 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 13:08:14.654994 1541872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 13:08:14.746011 1541872 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-16 13:08:14.736620967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 13:08:14.746121 1541872 docker.go:307] overlay module found
	I0816 13:08:14.748678 1541872 out.go:177] * Using the docker driver based on user configuration
	I0816 13:08:14.750407 1541872 start.go:297] selected driver: docker
	I0816 13:08:14.750424 1541872 start.go:901] validating driver "docker" against <nil>
	I0816 13:08:14.750438 1541872 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 13:08:14.753301 1541872 out.go:201] 
	W0816 13:08:14.755191 1541872 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0816 13:08:14.757362 1541872 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-727899 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-727899

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-727899

>>> host: /etc/nsswitch.conf:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /etc/hosts:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /etc/resolv.conf:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-727899

>>> host: crictl pods:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: crictl containers:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> k8s: describe netcat deployment:
error: context "false-727899" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-727899" does not exist

>>> k8s: netcat logs:
error: context "false-727899" does not exist

>>> k8s: describe coredns deployment:
error: context "false-727899" does not exist

>>> k8s: describe coredns pods:
error: context "false-727899" does not exist

>>> k8s: coredns logs:
error: context "false-727899" does not exist

>>> k8s: describe api server pod(s):
error: context "false-727899" does not exist

>>> k8s: api server logs:
error: context "false-727899" does not exist

>>> host: /etc/cni:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: ip a s:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: ip r s:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: iptables-save:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: iptables table nat:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> k8s: describe kube-proxy daemon set:
error: context "false-727899" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-727899" does not exist

>>> k8s: kube-proxy logs:
error: context "false-727899" does not exist

>>> host: kubelet daemon status:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: kubelet daemon config:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> k8s: kubelet logs:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-727899

>>> host: docker daemon status:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: docker daemon config:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /etc/docker/daemon.json:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: docker system info:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: cri-docker daemon status:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: cri-docker daemon config:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"
>>> host: cri-dockerd version:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727899"

                                                
                                                
----------------------- debugLogs end: false-727899 [took: 4.280498613s] --------------------------------
helpers_test.go:175: Cleaning up "false-727899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-727899
--- PASS: TestNetworkPlugins/group/false (4.70s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-246634 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-246634 "sudo systemctl is-active --quiet service kubelet": exit status 1 (304.239331ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (0.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

TestStoppedBinaryUpgrade/Upgrade (111.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2714389974 start -p stopped-upgrade-188864 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2714389974 start -p stopped-upgrade-188864 --memory=2200 --vm-driver=docker  --container-runtime=crio: (32.186970447s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2714389974 -p stopped-upgrade-188864 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2714389974 -p stopped-upgrade-188864 stop: (2.601240333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-188864 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-188864 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m17.00458296s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.79s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-188864
E0816 13:11:44.515673 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-188864: (1.177503582s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestNetworkPlugins/group/auto/Start (53.67s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.672001713s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-727899 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-727899 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8zjpj" [ea5aeb3c-39b8-4bab-9c0e-ed803ffe45eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8zjpj" [ea5aeb3c-39b8-4bab-9c0e-ed803ffe45eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006977941s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (52.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (52.681706315s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.68s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ck54n" [b21f9205-c97c-4296-92b9-d384a277a113] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004987711s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-727899 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-727899 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b2pbg" [0a3be195-ae83-47cd-af0b-ff44df46e094] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b2pbg" [0a3be195-ae83-47cd-af0b-ff44df46e094] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003654179s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/calico/Start (69.37s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0816 13:17:36.730049 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.372990439s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.37s)

TestNetworkPlugins/group/custom-flannel/Start (62.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0816 13:18:41.446279 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.975161065s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.98s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nmcdp" [ada68d10-d1c0-4973-86a7-460552b5ac17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.017939404s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-727899 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

TestNetworkPlugins/group/calico/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-727899 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z8pl9" [014a5a1e-3f5e-4665-990b-cd5d8918461e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z8pl9" [014a5a1e-3f5e-4665-990b-cd5d8918461e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004767352s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.38s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-727899 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-727899 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fxzp9" [8c722154-ffdd-49d9-b5be-47fbd138fd7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fxzp9" [8c722154-ffdd-49d9-b5be-47fbd138fd7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.0107311s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

TestNetworkPlugins/group/enable-default-cni/Start (82.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.562053161s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.56s)

TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.34s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0816 13:20:25.985907 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:25.992269 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:26.003665 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:26.024972 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:26.066276 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:26.147600 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:26.308994 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:26.630712 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:27.272237 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:28.553544 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:31.115958 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:36.237685 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:20:46.480703 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.275323056s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-727899 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-727899 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8z9h2" [e4ffed41-3678-49d5-b1b0-5193c00b087d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8z9h2" [e4ffed41-3678-49d5-b1b0-5193c00b087d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003967832s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-czrlp" [8dff2377-79b9-4a37-87d5-6febdbec182b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003977218s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-727899 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/flannel/NetCatPod (13.39s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-727899 replace --force -f testdata/netcat-deployment.yaml
E0816 13:21:06.963204 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bp226" [58280c82-d48e-4768-884d-dce436fefc4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bp226" [58280c82-d48e-4768-884d-dce436fefc4c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004364153s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.39s)

TestNetworkPlugins/group/flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (73.84s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-727899 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.844195055s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (190.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-621332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 13:21:47.925454 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.209718 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.216078 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.227427 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.248878 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.290229 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.371834 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.533304 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:50.854929 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:51.496355 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:52.777888 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:21:55.339937 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:22:00.461892 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:22:10.703976 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:22:31.186224 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-621332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m10.945812507s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (190.95s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-727899 "pgrep -a kubelet"
E0816 13:22:36.732093 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (14.43s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-727899 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5pk9z" [f3dff61d-ac87-4060-a77f-e540823adaf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5pk9z" [f3dff61d-ac87-4060-a77f-e540823adaf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.004883261s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.43s)

TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-727899 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-727899 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0816 13:35:49.399334 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (60.9s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-159566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:23:41.445807 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:43.950972 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:43.957305 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:43.968634 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:43.989980 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:44.031312 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:44.112718 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:44.274733 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:44.596376 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:45.237980 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:46.520022 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:49.081352 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:23:54.203541 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:04.445193 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-159566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m0.898863708s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.90s)

TestStartStop/group/no-preload/serial/DeployApp (8.41s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-159566 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [baf44ce3-85d5-47cf-a0a5-4ce2531624cc] Pending
helpers_test.go:344: "busybox" [baf44ce3-85d5-47cf-a0a5-4ce2531624cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [baf44ce3-85d5-47cf-a0a5-4ce2531624cc] Running
E0816 13:24:21.233325 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.239799 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.251281 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.272807 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.314261 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.395793 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.557431 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:21.879085 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:22.520588 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:23.802794 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004472208s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-159566 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-159566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0816 13:24:24.926590 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-159566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037650926s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-159566 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (11.97s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-159566 --alsologtostderr -v=3
E0816 13:24:26.364904 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:31.486684 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:24:34.070005 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-159566 --alsologtostderr -v=3: (11.970037857s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-159566 -n no-preload-159566
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-159566 -n no-preload-159566: exit status 7 (76.618111ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-159566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (290.95s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-159566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:24:41.728129 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-159566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m50.541382733s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-159566 -n no-preload-159566
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (290.95s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-621332 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a7ec3849-c533-46f5-8027-a023fe5a644d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a7ec3849-c533-46f5-8027-a023fe5a644d] Running
E0816 13:25:02.210106 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:05.888787 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004215449s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-621332 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-621332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-621332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.318639086s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-621332 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.50s)

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-621332 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-621332 --alsologtostderr -v=3: (12.168165281s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-621332 -n old-k8s-version-621332
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-621332 -n old-k8s-version-621332: exit status 7 (75.578846ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-621332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (148.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-621332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 13:25:25.985962 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:43.171839 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.399903 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.406324 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.417706 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.439167 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.480599 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.562203 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:49.723841 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:50.045141 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:50.686981 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:51.968803 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:53.688566 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:54.530949 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:25:59.652937 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.308400 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.318154 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.335909 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.364665 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.410166 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.492344 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.653917 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:00.975596 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:01.617823 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:02.899573 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:05.461912 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:09.894902 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:10.584211 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:20.826003 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:27.810271 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:30.376873 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:41.307248 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:26:50.209548 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:05.093221 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:11.339083 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:17.911868 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:22.269538 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:36.729455 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.329817 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.336266 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.347617 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.368984 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.410479 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.492490 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.654000 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:37.975901 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:38.617297 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:39.899483 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:42.461082 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:27:47.583458 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-621332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m27.713503761s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-621332 -n old-k8s-version-621332
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (148.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gch4g" [3c0d2b62-6149-4225-bf5b-5aa3b9af2cc1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00351008s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gch4g" [3c0d2b62-6149-4225-bf5b-5aa3b9af2cc1] Running
E0816 13:27:57.825701 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005722799s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-621332 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.39s)

                                                

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-621332 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-621332 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-621332 -n old-k8s-version-621332
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-621332 -n old-k8s-version-621332: exit status 2 (334.601784ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-621332 -n old-k8s-version-621332
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-621332 -n old-k8s-version-621332: exit status 2 (343.733343ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-621332 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-621332 -n old-k8s-version-621332
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-621332 -n old-k8s-version-621332
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

TestStartStop/group/embed-certs/serial/FirstStart (51.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-222978 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:28:18.307999 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:28:24.517769 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:28:33.260437 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:28:41.445947 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:28:43.950396 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:28:44.192151 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-222978 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (51.30841627s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.31s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-222978 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [21b3111e-e968-460c-95d1-68976ef3857f] Pending
E0816 13:28:59.270289 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [21b3111e-e968-460c-95d1-68976ef3857f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [21b3111e-e968-460c-95d1-68976ef3857f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004312698s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-222978 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-222978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-222978 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-222978 --alsologtostderr -v=3
E0816 13:29:11.652227 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-222978 --alsologtostderr -v=3: (11.955825168s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-222978 -n embed-certs-222978
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-222978 -n embed-certs-222978: exit status 7 (72.232246ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-222978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (303.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-222978 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:29:21.233201 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-222978 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m2.88984659s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-222978 -n embed-certs-222978
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.24s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8ddzv" [3d2a44b0-445e-4464-ac16-99debc244190] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004515827s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8ddzv" [3d2a44b0-445e-4464-ac16-99debc244190] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004870442s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-159566 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-159566 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (3.6s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-159566 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-159566 -n no-preload-159566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-159566 -n no-preload-159566: exit status 2 (346.977228ms)
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-159566 -n no-preload-159566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-159566 -n no-preload-159566: exit status 2 (301.450817ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-159566 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-159566 -n no-preload-159566
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-159566 -n no-preload-159566
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.60s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-676632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:29:48.934934 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:57.920097 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:57.926456 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:57.937875 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:57.959814 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:58.013636 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:58.095636 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:58.257354 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:58.579322 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:59.221478 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:00.508966 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:03.070740 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:08.192990 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:18.435202 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:21.192024 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:25.985704 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:30:38.917338 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-676632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (55.456838698s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.46s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-676632 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f728d8e-dc98-4ae7-8840-4fa275a882d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4f728d8e-dc98-4ae7-8840-4fa275a882d1] Running
E0816 13:30:49.400349 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004185001s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-676632 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-676632 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-676632 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069162192s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-676632 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-676632 --alsologtostderr -v=3
E0816 13:31:00.307829 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-676632 --alsologtostderr -v=3: (12.019272635s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632: exit status 7 (72.433415ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-676632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-676632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:31:17.101969 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/enable-default-cni-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:31:19.879455 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:31:28.034143 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:31:50.210567 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/kindnet-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:32:19.797452 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:32:36.730150 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/functional-890712/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:32:37.329960 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:32:41.801278 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:33:05.034284 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/bridge-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:33:41.445705 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/addons-606349/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:33:43.950175 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/calico-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:15.978272 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:15.984685 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:15.996055 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:16.017544 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:16.059116 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:16.140674 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:16.302221 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:16.623947 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:17.266241 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:18.547623 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:21.109181 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:21.233898 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/custom-flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-676632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m51.535091278s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.88s)
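The cert_rotation errors interleaved through this run all point at client certs of profiles that had already been deleted by earlier tests. A quick way to see which profiles the watcher is still retrying, and how often, is to count the `profiles/<name>/client.crt` paths in the report. This is a triage sketch, not part of the test suite; the sample lines are copied verbatim from the log above, and in practice you would pipe the full saved report in instead.

```shell
# Count cert_rotation failures per minikube profile.
# The sample lines below are copied verbatim from this report.
sample='E0816 13:34:15.978272 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:15.984685 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:29:57.920097 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"'

# Extract the profile name from each error line, then tally occurrences.
summary=$(printf '%s\n' "$sample" \
  | grep -o 'profiles/[^/]*/client\.crt' \
  | sed 's|profiles/\(.*\)/client\.crt|\1|' \
  | sort | uniq -c | sort -rn)
printf '%s\n' "$summary"
```

Against the sample above this prints two lines, with no-preload-159566 counted twice and old-k8s-version-621332 once.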

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7lndm" [86fcc49a-fcdc-4640-8b88-c7307636b320] Running
E0816 13:34:26.230796 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.024136629s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7lndm" [86fcc49a-fcdc-4640-8b88-c7307636b320] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004910084s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-222978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-222978 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-222978 --alsologtostderr -v=1
E0816 13:34:36.472754 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-222978 -n embed-certs-222978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-222978 -n embed-certs-222978: exit status 2 (316.823617ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-222978 -n embed-certs-222978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-222978 -n embed-certs-222978: exit status 2 (369.769866ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-222978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-222978 -n embed-certs-222978
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-222978 -n embed-certs-222978
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.19s)

TestStartStop/group/newest-cni/serial/FirstStart (38.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-854909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:34:56.954293 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:34:57.920447 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-854909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (38.579286783s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-854909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-854909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.869164424s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.87s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-854909 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-854909 --alsologtostderr -v=3: (1.285939254s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-854909 -n newest-cni-854909
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-854909 -n newest-cni-854909: exit status 7 (68.579361ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-854909 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (18.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-854909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 13:35:25.643527 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/old-k8s-version-621332/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:35:25.986319 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/auto-727899/client.crt: no such file or directory" logger="UnhandledError"
E0816 13:35:37.916494 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/no-preload-159566/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-854909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (17.847110152s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-854909 -n newest-cni-854909
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-854909 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-854909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-854909 -n newest-cni-854909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-854909 -n newest-cni-854909: exit status 2 (323.940884ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-854909 -n newest-cni-854909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-854909 -n newest-cni-854909: exit status 2 (323.010971ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-854909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-854909 -n newest-cni-854909
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-854909 -n newest-cni-854909
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.57s)
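Every `--- PASS` line in this report carries the subtest duration in parentheses, so the slowest steps of a group can be ranked straight from the text. A small sketch (the sample lines are taken from the newest-cni sections above; feed the whole report file in for a full ranking):

```shell
# Rank subtests by duration, parsed from "go test" PASS lines.
# Sample lines are copied from the report above.
passes='--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.58s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.29s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.57s)'

# Rewrite "--- PASS: <name> (<secs>s)" as "<secs> <name>", then sort numerically.
ranked=$(printf '%s\n' "$passes" \
  | sed -n 's/^--- PASS: \(.*\) (\([0-9.]*\)s)$/\2 \1/p' \
  | sort -rn)
printf '%s\n' "$ranked"
```

On the sample input the slowest step, FirstStart at 38.58s, sorts to the top.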

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9962g" [4c338f9f-b2b4-443f-9b42-e38661e78012] Running
E0816 13:36:00.307889 1386707 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1381335/.minikube/profiles/flannel-727899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003482844s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9962g" [4c338f9f-b2b4-443f-9b42-e38661e78012] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004368311s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-676632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-676632 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-676632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632: exit status 2 (321.516663ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632: exit status 2 (337.209032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-676632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-676632 -n default-k8s-diff-port-676632
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

Test skip (30/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.63s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-288613 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-288613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-288613
--- SKIP: TestDownloadOnlyKic (0.63s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.93s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-727899 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-727899

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-727899

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: /etc/hosts:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: /etc/resolv.conf:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-727899

>>> host: crictl pods:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: crictl containers:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> k8s: describe netcat deployment:
error: context "kubenet-727899" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-727899" does not exist

>>> k8s: netcat logs:
error: context "kubenet-727899" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-727899" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-727899" does not exist

>>> k8s: coredns logs:
error: context "kubenet-727899" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-727899" does not exist

>>> k8s: api server logs:
error: context "kubenet-727899" does not exist

>>> host: /etc/cni:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: ip a s:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: ip r s:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: iptables-save:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: iptables table nat:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-727899" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-727899" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-727899" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: kubelet daemon config:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> k8s: kubelet logs:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-727899

>>> host: docker daemon status:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

>>> host: docker daemon config:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727899"

                                                
                                                
----------------------- debugLogs end: kubenet-727899 [took: 3.762145134s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-727899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-727899
--- SKIP: TestNetworkPlugins/group/kubenet (3.93s)

TestNetworkPlugins/group/cilium (6.13s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-727899 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-727899

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-727899

>>> host: /etc/nsswitch.conf:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/hosts:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/resolv.conf:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-727899

>>> host: crictl pods:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: crictl containers:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> k8s: describe netcat deployment:
error: context "cilium-727899" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-727899" does not exist

>>> k8s: netcat logs:
error: context "cilium-727899" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-727899" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-727899" does not exist

>>> k8s: coredns logs:
error: context "cilium-727899" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-727899" does not exist

>>> k8s: api server logs:
error: context "cilium-727899" does not exist

>>> host: /etc/cni:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: ip a s:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: ip r s:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: iptables-save:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: iptables table nat:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-727899

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-727899

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-727899" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-727899" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-727899

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-727899

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-727899" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-727899" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-727899" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-727899" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-727899" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: kubelet daemon config:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> k8s: kubelet logs:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-727899

>>> host: docker daemon status:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: docker daemon config:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: docker system info:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: cri-docker daemon status:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: cri-docker daemon config:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: cri-dockerd version:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: containerd daemon status:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: containerd daemon config:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: containerd config dump:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: crio daemon status:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: crio daemon config:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: /etc/crio:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

>>> host: crio config:
* Profile "cilium-727899" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727899"

----------------------- debugLogs end: cilium-727899 [took: 5.844448038s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-727899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-727899
--- SKIP: TestNetworkPlugins/group/cilium (6.13s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-767471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-767471
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)
