Test Report: Docker_Linux_crio 19443

8b84af123e21bffd183d137e5ca9151109c81e73:2024-08-15:35789

Tests failed (2/328)

Order  Failed test                        Duration (s)
34     TestAddons/parallel/Ingress        150.46
36     TestAddons/parallel/MetricsServer  325.75
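
To iterate on one of these failures locally, the failing subtest can be selected with Go's -run flag. A minimal sketch, assuming a minikube source checkout; the --minikube-start-args flag is taken from minikube's integration harness and the timeout is an arbitrary choice, so adjust both to your setup:

  go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m \
    -args --minikube-start-args='--driver=docker --container-runtime=crio'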
TestAddons/parallel/Ingress (150.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-877132 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-877132 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-877132 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [df4c372b-b171-4467-b9a5-23a7831fc55d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [df4c372b-b171-4467-b9a5-23a7831fc55d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003435269s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-877132 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.872764362s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-877132 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 addons disable ingress-dns --alsologtostderr -v=1: (1.301408695s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 addons disable ingress --alsologtostderr -v=1: (7.588332695s)
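
The "ssh: Process exited with status 28" above is the remote command's exit code surfaced through ssh: curl uses 28 for CURLE_OPERATION_TIMEDOUT, so the ingress never answered on 127.0.0.1 within curl's deadline. The failing step can be replayed by hand against the same profile; the command below is taken verbatim from the log, and the echo is just a sketch for inspecting the result:

  out/minikube-linux-amd64 -p addons-877132 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  echo "exit: $?"   # the harness observed exit status 1 from minikube ssh; the remote curl itself returned 28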
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-877132
helpers_test.go:235: (dbg) docker inspect addons-877132:

-- stdout --
	[
	    {
	        "Id": "0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748",
	        "Created": "2024-08-15T00:05:47.313639387Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T00:05:47.430605182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:49d4702e5c94195d7796cb79f5fbc9d7cc584c1c41f3c58bf1694d1da009b2f6",
	        "ResolvConfPath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/hosts",
	        "LogPath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748-json.log",
	        "Name": "/addons-877132",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-877132:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-877132",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811-init/diff:/var/lib/docker/overlay2/0205a5511280a28ae3b2781b04e306ca3ba6d39df24866040bde00e4e577fc69/diff",
	                "MergedDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811/merged",
	                "UpperDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811/diff",
	                "WorkDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-877132",
	                "Source": "/var/lib/docker/volumes/addons-877132/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-877132",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-877132",
	                "name.minikube.sigs.k8s.io": "addons-877132",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a0fbe5e4a1988f743bcdf7dea1f27c6a575bb4991e0dc783f167f6a2c62a4ac",
	            "SandboxKey": "/var/run/docker/netns/6a0fbe5e4a19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-877132": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "92741e9c6adef761a12cc5aa129b7ea5de95847ec3af60896db99bb0f8592a7c",
	                    "EndpointID": "e03ef1cf5500ca2f0df1215461c824d1aaac3f152cbba89e7dd5d59184418014",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-877132",
	                        "0a128850adc6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
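
The NetworkSettings.Ports map in the inspect output above is where the published host ports live, and it is what the harness reads when it needs the SSH endpoint (the same Go template appears in the minikube log further down). As a standalone sketch of that query:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-877132
  # For this run the Ports block shows 22/tcp bound to 127.0.0.1:32768, so this prints 32768.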
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-877132 -n addons-877132
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 logs -n 25: (1.032065266s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-237330 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | download-docker-237330                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-237330                                                                   | download-docker-237330 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-616195   | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | binary-mirror-616195                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46729                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-616195                                                                     | binary-mirror-616195   | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-877132 --wait=true                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-877132 ssh cat                                                                       | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | /opt/local-path-provisioner/pvc-56d7ae18-0d09-496f-9576-9fd79c71aa37_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | -p addons-877132                                                                            |                        |         |         |                     |                     |
	| ip      | addons-877132 ip                                                                            | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | -p addons-877132                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:09 UTC |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-877132 ssh curl -s                                                                   | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-877132 addons                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-877132 addons                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-877132 ip                                                                            | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:11 UTC | 15 Aug 24 00:11 UTC |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:11 UTC | 15 Aug 24 00:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:11 UTC | 15 Aug 24 00:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:05:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:05:23.654201   33429 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:05:23.654618   33429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:23.654663   33429 out.go:304] Setting ErrFile to fd 2...
	I0815 00:05:23.654681   33429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:23.655134   33429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:05:23.655982   33429 out.go:298] Setting JSON to false
	I0815 00:05:23.656755   33429 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6461,"bootTime":1723673863,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:05:23.656807   33429 start.go:139] virtualization: kvm guest
	I0815 00:05:23.658523   33429 out.go:177] * [addons-877132] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:05:23.659971   33429 notify.go:220] Checking for updates...
	I0815 00:05:23.659982   33429 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:05:23.661059   33429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:05:23.662403   33429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:05:23.663582   33429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:05:23.664704   33429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:05:23.665903   33429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:05:23.667224   33429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:05:23.687835   33429 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:05:23.687962   33429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:23.732664   33429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 00:05:23.724426498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:23.732801   33429 docker.go:307] overlay module found
	I0815 00:05:23.734680   33429 out.go:177] * Using the docker driver based on user configuration
	I0815 00:05:23.735854   33429 start.go:297] selected driver: docker
	I0815 00:05:23.735875   33429 start.go:901] validating driver "docker" against <nil>
	I0815 00:05:23.735889   33429 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:05:23.736663   33429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:23.783497   33429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 00:05:23.775412376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:23.783655   33429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:05:23.783845   33429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:05:23.785330   33429 out.go:177] * Using Docker driver with root privileges
	I0815 00:05:23.786691   33429 cni.go:84] Creating CNI manager for ""
	I0815 00:05:23.786706   33429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:05:23.786715   33429 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:05:23.786761   33429 start.go:340] cluster config:
	{Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:23.787982   33429 out.go:177] * Starting "addons-877132" primary control-plane node in "addons-877132" cluster
	I0815 00:05:23.789023   33429 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:05:23.790242   33429 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:05:23.791298   33429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:23.791325   33429 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:05:23.791336   33429 cache.go:56] Caching tarball of preloaded images
	I0815 00:05:23.791373   33429 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:05:23.791398   33429 preload.go:172] Found /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:05:23.791407   33429 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:05:23.791714   33429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/config.json ...
	I0815 00:05:23.791738   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/config.json: {Name:mk5c91fbc1c1fde61b892ae0ae5591fd2dd76b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:23.805688   33429 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:05:23.805810   33429 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:05:23.805828   33429 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:05:23.805832   33429 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:05:23.805840   33429 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:05:23.805847   33429 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 00:05:35.207757   33429 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 00:05:35.207800   33429 cache.go:194] Successfully downloaded all kic artifacts
	I0815 00:05:35.207842   33429 start.go:360] acquireMachinesLock for addons-877132: {Name:mk87c4769b05652828bbd513a339608563304c52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:05:35.207952   33429 start.go:364] duration metric: took 89.15µs to acquireMachinesLock for "addons-877132"
	I0815 00:05:35.207977   33429 start.go:93] Provisioning new machine with config: &{Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:05:35.208064   33429 start.go:125] createHost starting for "" (driver="docker")
	I0815 00:05:35.209932   33429 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 00:05:35.210140   33429 start.go:159] libmachine.API.Create for "addons-877132" (driver="docker")
	I0815 00:05:35.210169   33429 client.go:168] LocalClient.Create starting
	I0815 00:05:35.210265   33429 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem
	I0815 00:05:35.403780   33429 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem
	I0815 00:05:35.581910   33429 cli_runner.go:164] Run: docker network inspect addons-877132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 00:05:35.597259   33429 cli_runner.go:211] docker network inspect addons-877132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 00:05:35.597337   33429 network_create.go:284] running [docker network inspect addons-877132] to gather additional debugging logs...
	I0815 00:05:35.597356   33429 cli_runner.go:164] Run: docker network inspect addons-877132
	W0815 00:05:35.612656   33429 cli_runner.go:211] docker network inspect addons-877132 returned with exit code 1
	I0815 00:05:35.612683   33429 network_create.go:287] error running [docker network inspect addons-877132]: docker network inspect addons-877132: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-877132 not found
	I0815 00:05:35.612694   33429 network_create.go:289] output of [docker network inspect addons-877132]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-877132 not found
	
	** /stderr **
	I0815 00:05:35.612781   33429 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:05:35.628068   33429 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000157c0}
	I0815 00:05:35.628115   33429 network_create.go:124] attempt to create docker network addons-877132 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 00:05:35.628158   33429 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-877132 addons-877132
	I0815 00:05:35.684711   33429 network_create.go:108] docker network addons-877132 192.168.49.0/24 created
	I0815 00:05:35.684740   33429 kic.go:121] calculated static IP "192.168.49.2" for the "addons-877132" container
	I0815 00:05:35.684801   33429 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 00:05:35.699815   33429 cli_runner.go:164] Run: docker volume create addons-877132 --label name.minikube.sigs.k8s.io=addons-877132 --label created_by.minikube.sigs.k8s.io=true
	I0815 00:05:35.715691   33429 oci.go:103] Successfully created a docker volume addons-877132
	I0815 00:05:35.715787   33429 cli_runner.go:164] Run: docker run --rm --name addons-877132-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877132 --entrypoint /usr/bin/test -v addons-877132:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 00:05:42.917047   33429 cli_runner.go:217] Completed: docker run --rm --name addons-877132-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877132 --entrypoint /usr/bin/test -v addons-877132:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (7.201218931s)
	I0815 00:05:42.917075   33429 oci.go:107] Successfully prepared a docker volume addons-877132
	I0815 00:05:42.917090   33429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:42.917109   33429 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 00:05:42.917177   33429 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-877132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 00:05:47.252511   33429 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-877132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.335289814s)
	I0815 00:05:47.252538   33429 kic.go:203] duration metric: took 4.335426883s to extract preloaded images to volume ...
	W0815 00:05:47.252667   33429 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 00:05:47.252767   33429 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 00:05:47.299562   33429 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-877132 --name addons-877132 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877132 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-877132 --network addons-877132 --ip 192.168.49.2 --volume addons-877132:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 00:05:47.614924   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Running}}
	I0815 00:05:47.633132   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:05:47.650026   33429 cli_runner.go:164] Run: docker exec addons-877132 stat /var/lib/dpkg/alternatives/iptables
	I0815 00:05:47.690704   33429 oci.go:144] the created container "addons-877132" has a running status.
	I0815 00:05:47.690734   33429 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa...
	I0815 00:05:47.887374   33429 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 00:05:47.912208   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:05:47.932744   33429 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 00:05:47.932762   33429 kic_runner.go:114] Args: [docker exec --privileged addons-877132 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 00:05:47.981634   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:05:47.999627   33429 machine.go:94] provisionDockerMachine start ...
	I0815 00:05:47.999690   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.016577   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.016770   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.016782   33429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:05:48.232779   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877132
	
	I0815 00:05:48.232815   33429 ubuntu.go:169] provisioning hostname "addons-877132"
	I0815 00:05:48.232872   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.251859   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.252026   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.252041   33429 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-877132 && echo "addons-877132" | sudo tee /etc/hostname
	I0815 00:05:48.391228   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877132
	
	I0815 00:05:48.391307   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.407474   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.407658   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.407674   33429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-877132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-877132/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-877132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:05:48.537347   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:05:48.537372   33429 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-25263/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-25263/.minikube}
	I0815 00:05:48.537409   33429 ubuntu.go:177] setting up certificates
	I0815 00:05:48.537421   33429 provision.go:84] configureAuth start
	I0815 00:05:48.537467   33429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877132
	I0815 00:05:48.553566   33429 provision.go:143] copyHostCerts
	I0815 00:05:48.553637   33429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-25263/.minikube/key.pem (1675 bytes)
	I0815 00:05:48.553746   33429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-25263/.minikube/ca.pem (1078 bytes)
	I0815 00:05:48.553868   33429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-25263/.minikube/cert.pem (1123 bytes)
	I0815 00:05:48.553930   33429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-25263/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca-key.pem org=jenkins.addons-877132 san=[127.0.0.1 192.168.49.2 addons-877132 localhost minikube]
	I0815 00:05:48.723505   33429 provision.go:177] copyRemoteCerts
	I0815 00:05:48.723557   33429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:05:48.723588   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.739526   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:48.837635   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:05:48.857192   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:05:48.876384   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 00:05:48.895738   33429 provision.go:87] duration metric: took 358.301506ms to configureAuth
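configureAuth signs a Docker-machine server certificate with the minikube CA in Go (provision.go); the SAN list in the log covers every name the daemon may be reached by. An equivalent openssl sketch, not minikube's actual code path, assuming ca.pem and ca-key.pem in the working directory:

	# Issue a CA-signed server cert carrying the SANs from the log above.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.addons-877132"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-877132,DNS:localhost,DNS:minikube')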
	I0815 00:05:48.895761   33429 ubuntu.go:193] setting minikube options for container-runtime
	I0815 00:05:48.895946   33429 config.go:182] Loaded profile config "addons-877132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:05:48.896036   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.911607   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.911755   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.911770   33429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:05:49.120408   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:05:49.120437   33429 machine.go:97] duration metric: took 1.12079224s to provisionDockerMachine
	I0815 00:05:49.120452   33429 client.go:171] duration metric: took 13.910275572s to LocalClient.Create
	I0815 00:05:49.120476   33429 start.go:167] duration metric: took 13.910334619s to libmachine.API.Create "addons-877132"
	I0815 00:05:49.120490   33429 start.go:293] postStartSetup for "addons-877132" (driver="docker")
	I0815 00:05:49.120505   33429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:05:49.120592   33429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:05:49.120645   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.135907   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.229819   33429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:05:49.232457   33429 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 00:05:49.232497   33429 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 00:05:49.232511   33429 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 00:05:49.232522   33429 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 00:05:49.232534   33429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-25263/.minikube/addons for local assets ...
	I0815 00:05:49.232593   33429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-25263/.minikube/files for local assets ...
	I0815 00:05:49.232614   33429 start.go:296] duration metric: took 112.117099ms for postStartSetup
	I0815 00:05:49.232863   33429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877132
	I0815 00:05:49.248484   33429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/config.json ...
	I0815 00:05:49.248733   33429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:05:49.248790   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.263312   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.354018   33429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 00:05:49.357822   33429 start.go:128] duration metric: took 14.149744159s to createHost
	I0815 00:05:49.357843   33429 start.go:83] releasing machines lock for "addons-877132", held for 14.149879091s
	I0815 00:05:49.357891   33429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877132
	I0815 00:05:49.373827   33429 ssh_runner.go:195] Run: cat /version.json
	I0815 00:05:49.373875   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.373874   33429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:05:49.373952   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.388848   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.389550   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.544079   33429 ssh_runner.go:195] Run: systemctl --version
	I0815 00:05:49.547823   33429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:05:49.682891   33429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:05:49.686787   33429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:05:49.702937   33429 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 00:05:49.703005   33429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:05:49.726571   33429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
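The two find invocations above sideline the kicbase image's stock loopback and bridge/podman CNI definitions by appending a .mk_disabled suffix, so that only the CNI minikube installs later (kindnet here) is active. The rename is deliberately reversible; the inverse, should the stock configs ever be needed again, is just:

	# Restore CNI configs that minikube disabled by renaming.
	for f in /etc/cni/net.d/*.mk_disabled; do
	  sudo mv "$f" "${f%.mk_disabled}"
	done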
	I0815 00:05:49.726594   33429 start.go:495] detecting cgroup driver to use...
	I0815 00:05:49.726621   33429 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 00:05:49.726658   33429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:05:49.739246   33429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:05:49.748243   33429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:05:49.748292   33429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:05:49.759758   33429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:05:49.771605   33429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:05:49.845117   33429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:05:49.920932   33429 docker.go:233] disabling docker service ...
	I0815 00:05:49.920986   33429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:05:49.936575   33429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:05:49.945679   33429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:05:50.020526   33429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:05:50.097001   33429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:05:50.106254   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:05:50.119192   33429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:05:50.119247   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.126943   33429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:05:50.126988   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.134580   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.142147   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.149864   33429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:05:50.156952   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.164563   33429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.177100   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.184728   33429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:05:50.191170   33429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:05:50.197628   33429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:05:50.267275   33429 ssh_runner.go:195] Run: sudo systemctl restart crio
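Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs driver detected on the host, move conmon into the pod cgroup, and open low ports to unprivileged processes. A quick way to confirm the result after the restart; the expected values are reconstructed from the commands, not captured from the node:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected (reconstructed):
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	#   pause_image = "registry.k8s.io/pause:3.10"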
	I0815 00:05:50.361312   33429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:05:50.361385   33429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:05:50.364378   33429 start.go:563] Will wait 60s for crictl version
	I0815 00:05:50.364426   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:05:50.367117   33429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:05:50.397013   33429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 00:05:50.397116   33429 ssh_runner.go:195] Run: crio --version
	I0815 00:05:50.429244   33429 ssh_runner.go:195] Run: crio --version
	I0815 00:05:50.461529   33429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 00:05:50.462727   33429 cli_runner.go:164] Run: docker network inspect addons-877132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:05:50.477480   33429 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 00:05:50.480493   33429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:05:50.489550   33429 kubeadm.go:883] updating cluster {Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:05:50.489649   33429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:50.489701   33429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:05:50.550221   33429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:05:50.550242   33429 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:05:50.550279   33429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:05:50.579201   33429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:05:50.579222   33429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:05:50.579229   33429 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 00:05:50.579313   33429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-877132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:05:50.579367   33429 ssh_runner.go:195] Run: crio config
	I0815 00:05:50.616570   33429 cni.go:84] Creating CNI manager for ""
	I0815 00:05:50.616587   33429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:05:50.616596   33429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:05:50.616615   33429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-877132 NodeName:addons-877132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:05:50.616737   33429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-877132"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
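Before this rendered file is handed to kubeadm init, it can be sanity-checked offline. Recent kubeadm releases include a validate subcommand; a sketch using the binaries path from the log, run on the node once the file has been copied there, which should also surface the v1beta3 deprecation warnings seen later in the init output:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml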
	I0815 00:05:50.616787   33429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:05:50.624272   33429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:05:50.624316   33429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:05:50.631299   33429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 00:05:50.645652   33429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:05:50.660401   33429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0815 00:05:50.674927   33429 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 00:05:50.677624   33429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:05:50.686437   33429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:05:50.757391   33429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:05:50.768422   33429 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132 for IP: 192.168.49.2
	I0815 00:05:50.768442   33429 certs.go:194] generating shared ca certs ...
	I0815 00:05:50.768461   33429 certs.go:226] acquiring lock for ca certs: {Name:mk309157fa54119ea004edf6a36596f33b512455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:50.768591   33429 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key
	I0815 00:05:51.184009   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt ...
	I0815 00:05:51.184041   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt: {Name:mk2281b087378b5171f6a3ababac7c23d91f7a2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.184205   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key ...
	I0815 00:05:51.184215   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key: {Name:mk7f28e7104766f3bc3ab7a26fee1d70165eac48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.184292   33429 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key
	I0815 00:05:51.306696   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.crt ...
	I0815 00:05:51.306724   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.crt: {Name:mk007ceaa696b48cf9b73125039c9ff11d73a36e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.306876   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key ...
	I0815 00:05:51.306886   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key: {Name:mk6d0aefb75ddffa612443a728f4dc6aa04f663c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.307002   33429 certs.go:256] generating profile certs ...
	I0815 00:05:51.307058   33429 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.key
	I0815 00:05:51.307071   33429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt with IP's: []
	I0815 00:05:51.500129   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt ...
	I0815 00:05:51.500154   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: {Name:mk439bedf422c6d72db5acc435a7cea939a2f4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.500292   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.key ...
	I0815 00:05:51.500301   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.key: {Name:mk3dc5113cd977cffed1c4766b6188c8c37f9ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.500364   33429 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf
	I0815 00:05:51.500381   33429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 00:05:51.609033   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf ...
	I0815 00:05:51.609058   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf: {Name:mk6703eb6edd26daf5046bd4ca2b634b9cafdd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.609196   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf ...
	I0815 00:05:51.609208   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf: {Name:mk478e8492cd5c7d56e515385c8a0a37e3aba211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.609275   33429 certs.go:381] copying /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf -> /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt
	I0815 00:05:51.609363   33429 certs.go:385] copying /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf -> /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key
	I0815 00:05:51.609426   33429 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key
	I0815 00:05:51.609444   33429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt with IP's: []
	I0815 00:05:51.900454   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt ...
	I0815 00:05:51.900483   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt: {Name:mkc962b237253f5c62e68e3c76301d6fa0e4fa6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.900657   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key ...
	I0815 00:05:51.900668   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key: {Name:mk276eb8609a41c9cf483090c2f7a4fd7e3e1b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.900838   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 00:05:51.900870   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:05:51.900893   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:05:51.900916   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/key.pem (1675 bytes)
	I0815 00:05:51.901483   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:05:51.921595   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 00:05:51.940717   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:05:51.960157   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:05:51.979624   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:05:51.998486   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 00:05:52.017320   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:05:52.037272   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:05:52.056417   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:05:52.076144   33429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:05:52.090393   33429 ssh_runner.go:195] Run: openssl version
	I0815 00:05:52.094916   33429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:05:52.102405   33429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:05:52.105121   33429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:05:52.105164   33429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:05:52.110939   33429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
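The hash-and-symlink pair above is how the minikube CA enters OpenSSL's hashed CApath lookup: openssl x509 -hash prints the certificate's subject hash (b5213941 here), and tools that scan /etc/ssl/certs resolve trust by probing <hash>.0 links. The same dance in three lines:

	# Link a CA cert under its subject hash so CApath scanners find it.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "${CERT}")   # prints b5213941 for this CA
	sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0"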
	I0815 00:05:52.118348   33429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:05:52.120909   33429 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:05:52.120944   33429 kubeadm.go:392] StartCluster: {Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:52.121035   33429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:05:52.121078   33429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:05:52.150788   33429 cri.go:89] found id: ""
	I0815 00:05:52.150851   33429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:05:52.158002   33429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:05:52.165020   33429 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 00:05:52.165057   33429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:05:52.172493   33429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:05:52.172506   33429 kubeadm.go:157] found existing configuration files:
	
	I0815 00:05:52.172543   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:05:52.179306   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:05:52.179343   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:05:52.186501   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:05:52.193388   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:05:52.193429   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:05:52.200229   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:05:52.207771   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:05:52.207840   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:05:52.214934   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:05:52.222802   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:05:52.222864   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:05:52.229685   33429 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 00:05:52.260389   33429 kubeadm.go:310] W0815 00:05:52.259734    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:05:52.260821   33429 kubeadm.go:310] W0815 00:05:52.260363    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:05:52.276476   33429 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0815 00:05:52.324462   33429 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:06:00.767633   33429 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:06:00.767703   33429 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:06:00.767862   33429 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 00:06:00.767927   33429 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0815 00:06:00.767962   33429 kubeadm.go:310] OS: Linux
	I0815 00:06:00.768007   33429 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 00:06:00.768077   33429 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 00:06:00.768149   33429 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 00:06:00.768219   33429 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 00:06:00.768289   33429 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 00:06:00.768359   33429 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 00:06:00.768410   33429 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 00:06:00.768473   33429 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 00:06:00.768532   33429 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 00:06:00.768655   33429 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:06:00.768793   33429 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:06:00.768925   33429 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:06:00.769001   33429 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:06:00.770536   33429 out.go:204]   - Generating certificates and keys ...
	I0815 00:06:00.770633   33429 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:06:00.770715   33429 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:06:00.770788   33429 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:06:00.770862   33429 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:06:00.770939   33429 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:06:00.771012   33429 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:06:00.771100   33429 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:06:00.771216   33429 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-877132 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:06:00.771279   33429 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:06:00.771436   33429 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-877132 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:06:00.771528   33429 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:06:00.771617   33429 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:06:00.771655   33429 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:06:00.771707   33429 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:06:00.771747   33429 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:06:00.771799   33429 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:06:00.771847   33429 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:06:00.771896   33429 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:06:00.771941   33429 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:06:00.772003   33429 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:06:00.772075   33429 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:06:00.773209   33429 out.go:204]   - Booting up control plane ...
	I0815 00:06:00.773295   33429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:06:00.773364   33429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:06:00.773424   33429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:06:00.773510   33429 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:06:00.773602   33429 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:06:00.773645   33429 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:06:00.773767   33429 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:06:00.773912   33429 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:06:00.773971   33429 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.387534ms
	I0815 00:06:00.774033   33429 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:06:00.774089   33429 kubeadm.go:310] [api-check] The API server is healthy after 4.001373443s
	I0815 00:06:00.774175   33429 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:06:00.774282   33429 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:06:00.774335   33429 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:06:00.774487   33429 kubeadm.go:310] [mark-control-plane] Marking the node addons-877132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:06:00.774541   33429 kubeadm.go:310] [bootstrap-token] Using token: 9cd728.sstuwlg203zlj5vt
	I0815 00:06:00.775824   33429 out.go:204]   - Configuring RBAC rules ...
	I0815 00:06:00.775911   33429 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:06:00.775980   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:06:00.776107   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:06:00.776230   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:06:00.776336   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:06:00.776409   33429 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:06:00.776498   33429 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:06:00.776540   33429 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:06:00.776577   33429 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:06:00.776582   33429 kubeadm.go:310] 
	I0815 00:06:00.776628   33429 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:06:00.776633   33429 kubeadm.go:310] 
	I0815 00:06:00.776733   33429 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:06:00.776748   33429 kubeadm.go:310] 
	I0815 00:06:00.776790   33429 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:06:00.776837   33429 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:06:00.776884   33429 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:06:00.776897   33429 kubeadm.go:310] 
	I0815 00:06:00.776948   33429 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:06:00.776954   33429 kubeadm.go:310] 
	I0815 00:06:00.777017   33429 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:06:00.777027   33429 kubeadm.go:310] 
	I0815 00:06:00.777098   33429 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:06:00.777208   33429 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:06:00.777297   33429 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:06:00.777306   33429 kubeadm.go:310] 
	I0815 00:06:00.777383   33429 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:06:00.777447   33429 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:06:00.777453   33429 kubeadm.go:310] 
	I0815 00:06:00.777520   33429 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9cd728.sstuwlg203zlj5vt \
	I0815 00:06:00.777619   33429 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0aaee585d8cab38ae3fe05542b0fa84d163b2d1c3df394dbd390896caee3c485 \
	I0815 00:06:00.777641   33429 kubeadm.go:310] 	--control-plane 
	I0815 00:06:00.777647   33429 kubeadm.go:310] 
	I0815 00:06:00.777711   33429 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:06:00.777716   33429 kubeadm.go:310] 
	I0815 00:06:00.777805   33429 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9cd728.sstuwlg203zlj5vt \
	I0815 00:06:00.777934   33429 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0aaee585d8cab38ae3fe05542b0fa84d163b2d1c3df394dbd390896caee3c485 
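The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for bootstrapping nodes: it is the SHA-256 of the CA's DER-encoded public key. It can be recomputed on the control plane with the standard pipeline from the kubeadm documentation, here using the certificatesDir from the config above:

	# Recompute the discovery token CA cert hash on the control-plane node.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'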
	I0815 00:06:00.777944   33429 cni.go:84] Creating CNI manager for ""
	I0815 00:06:00.777950   33429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:06:00.779348   33429 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:06:00.780465   33429 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:06:00.783950   33429 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:06:00.783963   33429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:06:00.799808   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 00:06:00.977777   33429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:06:00.977867   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:00.977880   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-877132 minikube.k8s.io/updated_at=2024_08_15T00_06_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-877132 minikube.k8s.io/primary=true
	I0815 00:06:00.984880   33429 ops.go:34] apiserver oom_adj: -16
	I0815 00:06:01.066466   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:01.567517   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:02.066972   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:02.567491   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:03.067064   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:03.566958   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:04.066976   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:04.567486   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:05.067005   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:05.567422   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:05.627271   33429 kubeadm.go:1113] duration metric: took 4.649454362s to wait for elevateKubeSystemPrivileges
	I0815 00:06:05.627300   33429 kubeadm.go:394] duration metric: took 13.506358206s to StartCluster
	I0815 00:06:05.627317   33429 settings.go:142] acquiring lock: {Name:mk24702fc665a6ffc1bd2280cb721c81d58ddde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:05.627422   33429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:06:05.627782   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/kubeconfig: {Name:mk5a4aa2b57f058fc0dbb1196c79fd5fb38108bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:05.627943   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:06:05.627954   33429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:06:05.628018   33429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
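The toEnable map above is the resolved addon configuration for this profile: everything the test enables (ingress, metrics-server, registry, csi-hostpath-driver, ...) is true, the rest false. A hedged sketch of the CLI equivalents that drive the same toggles:

	# Assumption: standard minikube addon commands; state is reported per profile.
	minikube -p addons-877132 addons list
	minikube -p addons-877132 addons enable metrics-server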
	I0815 00:06:05.628156   33429 config.go:182] Loaded profile config "addons-877132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:05.628201   33429 addons.go:69] Setting cloud-spanner=true in profile "addons-877132"
	I0815 00:06:05.628254   33429 addons.go:234] Setting addon cloud-spanner=true in "addons-877132"
	I0815 00:06:05.628288   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628202   33429 addons.go:69] Setting volumesnapshots=true in profile "addons-877132"
	I0815 00:06:05.628342   33429 addons.go:234] Setting addon volumesnapshots=true in "addons-877132"
	I0815 00:06:05.628369   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628165   33429 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-877132"
	I0815 00:06:05.628437   33429 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-877132"
	I0815 00:06:05.628459   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628174   33429 addons.go:69] Setting registry=true in profile "addons-877132"
	I0815 00:06:05.628560   33429 addons.go:234] Setting addon registry=true in "addons-877132"
	I0815 00:06:05.628601   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628177   33429 addons.go:69] Setting metrics-server=true in profile "addons-877132"
	I0815 00:06:05.628697   33429 addons.go:234] Setting addon metrics-server=true in "addons-877132"
	I0815 00:06:05.628730   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628818   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628181   33429 addons.go:69] Setting storage-provisioner=true in profile "addons-877132"
	I0815 00:06:05.628836   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628853   33429 addons.go:234] Setting addon storage-provisioner=true in "addons-877132"
	I0815 00:06:05.628880   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628938   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.629027   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.629163   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.629295   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628176   33429 addons.go:69] Setting ingress-dns=true in profile "addons-877132"
	I0815 00:06:05.629708   33429 addons.go:234] Setting addon ingress-dns=true in "addons-877132"
	I0815 00:06:05.629750   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.630183   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.631193   33429 out.go:177] * Verifying Kubernetes components...
	I0815 00:06:05.632576   33429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:05.628189   33429 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-877132"
	I0815 00:06:05.632713   33429 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-877132"
	I0815 00:06:05.632998   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628188   33429 addons.go:69] Setting helm-tiller=true in profile "addons-877132"
	I0815 00:06:05.633347   33429 addons.go:234] Setting addon helm-tiller=true in "addons-877132"
	I0815 00:06:05.628192   33429 addons.go:69] Setting ingress=true in profile "addons-877132"
	I0815 00:06:05.633495   33429 addons.go:234] Setting addon ingress=true in "addons-877132"
	I0815 00:06:05.633553   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.633625   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628183   33429 addons.go:69] Setting inspektor-gadget=true in profile "addons-877132"
	I0815 00:06:05.634070   33429 addons.go:234] Setting addon inspektor-gadget=true in "addons-877132"
	I0815 00:06:05.634105   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.634517   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628196   33429 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-877132"
	I0815 00:06:05.636522   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.636547   33429 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-877132"
	I0815 00:06:05.636607   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.636740   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628163   33429 addons.go:69] Setting yakd=true in profile "addons-877132"
	I0815 00:06:05.637075   33429 addons.go:234] Setting addon yakd=true in "addons-877132"
	I0815 00:06:05.628197   33429 addons.go:69] Setting gcp-auth=true in profile "addons-877132"
	I0815 00:06:05.637104   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.637110   33429 mustload.go:65] Loading cluster: addons-877132
	I0815 00:06:05.637330   33429 config.go:182] Loaded profile config "addons-877132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:05.637534   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.637642   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628213   33429 addons.go:69] Setting default-storageclass=true in profile "addons-877132"
	I0815 00:06:05.638167   33429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-877132"
	I0815 00:06:05.628218   33429 addons.go:69] Setting volcano=true in profile "addons-877132"
	I0815 00:06:05.643982   33429 addons.go:234] Setting addon volcano=true in "addons-877132"
	I0815 00:06:05.637044   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.646007   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.666042   33429 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:06:05.666184   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.667416   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.668094   33429 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:05.668112   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:06:05.668158   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.669804   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:06:05.673019   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:06:05.673079   33429 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:06:05.673166   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.679897   33429 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:06:05.679941   33429 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:06:05.681192   33429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:06:05.681415   33429 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:05.681428   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:06:05.681478   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.682634   33429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:05.682649   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:06:05.682697   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.682859   33429 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:06:05.684119   33429 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:06:05.684135   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:06:05.684175   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.693193   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:06:05.693193   33429 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:06:05.694564   33429 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:05.694595   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:06:05.694652   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.696426   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:05.697529   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:05.699629   33429 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:06:05.700079   33429 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:05.700096   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:06:05.700247   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.701053   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:06:05.701069   33429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:06:05.701119   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.726356   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
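The sshutil lines here and below all dial 127.0.0.1:32768 with the profile's id_rsa key; that port is the host side of the container's 22/tcp mapping that the cli_runner inspect calls resolve. A hedged sketch of doing the same resolution by hand:

	# Same Go template as the logged docker inspect (minus the quoting); username "docker" per the sshutil lines.
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-877132)
	ssh -i /home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa -p "$PORT" docker@127.0.0.1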
	I0815 00:06:05.727572   33429 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	W0815 00:06:05.729466   33429 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
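The warning above is expected on this runner: the volcano addon declares no support for the cri-o runtime, so enabling it fails fast instead of deploying. A hedged way to confirm the runtime that triggers the check:

	# The CONTAINER-RUNTIME column should report cri-o for this profile.
	kubectl --context addons-877132 get nodes -o wide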
	I0815 00:06:05.734048   33429 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 00:06:05.734072   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 00:06:05.734131   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.739495   33429 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:06:05.740707   33429 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:06:05.740722   33429 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:06:05.740772   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.742643   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.746915   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.752420   33429 addons.go:234] Setting addon default-storageclass=true in "addons-877132"
	I0815 00:06:05.752463   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.752930   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.756364   33429 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-877132"
	I0815 00:06:05.756407   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.756866   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.764150   33429 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:06:05.769890   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:06:05.769911   33429 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:06:05.769965   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.771126   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.771429   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.772693   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.774045   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.785888   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.791410   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.793006   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.801917   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.801923   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.803826   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:06:05.805212   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 00:06:05.806397   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:06:05.807467   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:06:05.808848   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:06:05.810186   33429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:05.810207   33429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:06:05.810247   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.810340   33429 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:06:05.811519   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:06:05.812764   33429 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:06:05.813852   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:06:05.813940   33429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:05.813956   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:06:05.813991   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.816128   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:06:05.817177   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:06:05.817189   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:06:05.817233   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.830418   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.830537   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.832478   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	W0815 00:06:05.861368   33429 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:06:05.861400   33429 retry.go:31] will retry after 244.442357ms: ssh: handshake failed: EOF
	W0815 00:06:05.861473   33429 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:06:05.861481   33429 retry.go:31] will retry after 180.613371ms: ssh: handshake failed: EOF
	I0815 00:06:05.878964   33429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:06:05.879077   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
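The bash pipeline above fetches the coredns ConfigMap, splices a hosts block (mapping host.minikube.internal to 192.168.49.1, with fallthrough) ahead of the forward plugin plus a log directive ahead of errors, and replaces the ConfigMap. A hedged check of the rewritten Corefile:

	# The injected hosts{} block should precede "forward . /etc/resolv.conf".
	kubectl --context addons-877132 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'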
	I0815 00:06:06.077440   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:06.170081   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:06.174192   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:06.178934   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:06.271046   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:06.278098   33429 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:06:06.278121   33429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:06:06.356678   33429 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 00:06:06.356706   33429 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 00:06:06.359353   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:06.455385   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:06:06.455472   33429 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:06:06.474571   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:06:06.474654   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:06:06.554397   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:06.566563   33429 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:06:06.566657   33429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:06:06.656051   33429 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:06.656137   33429 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 00:06:06.656413   33429 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:06:06.656465   33429 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:06:06.673757   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:06:06.673805   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:06:06.677017   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:06:06.677039   33429 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:06:06.773033   33429 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:06:06.773062   33429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:06:06.860223   33429 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:06:06.860300   33429 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:06:06.860566   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:06:06.860609   33429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:06:06.868145   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:06.960420   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:06:06.960448   33429 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:06:07.055232   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:07.055261   33429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:06:07.058115   33429 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:06:07.058143   33429 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:06:07.154264   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:06:07.154294   33429 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:06:07.155257   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:06:07.155277   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:06:07.374343   33429 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:07.374368   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:06:07.374783   33429 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:06:07.374806   33429 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:06:07.455705   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:07.455728   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:06:07.456207   33429 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:07.456223   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:06:07.568863   33429 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:06:07.568893   33429 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:06:07.569574   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:06:07.569592   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:06:07.575132   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:07.659030   33429 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.779918509s)
	I0815 00:06:07.659192   33429 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 00:06:07.659130   33429 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.780140591s)
	I0815 00:06:07.660310   33429 node_ready.go:35] waiting up to 6m0s for node "addons-877132" to be "Ready" ...
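The line above starts a 6m0s wait on the node's Ready condition; the node_ready.go:53 lines that follow are its polls, reporting "Ready":"False" until the CNI settles. A hedged kubectl equivalent of the same wait:

	kubectl --context addons-877132 wait --for=condition=Ready node/addons-877132 --timeout=6m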
	I0815 00:06:07.757557   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:07.770064   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:07.867105   33429 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:06:07.867183   33429 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:06:07.965723   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:06:07.965749   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:06:08.058081   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:08.360078   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:06:08.360151   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:06:08.367634   33429 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:06:08.367702   33429 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:06:08.378974   33429 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-877132" context rescaled to 1 replicas
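Rescaling coredns to a single replica is minikube's standard single-node tweak. A hedged sketch of the same operation done by hand:

	kubectl --context addons-877132 -n kube-system scale deployment coredns --replicas=1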
	I0815 00:06:08.672860   33429 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:08.672881   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:06:08.676306   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:06:08.676368   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:06:08.970098   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:08.970391   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:06:08.970437   33429 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:06:09.362832   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:06:09.362917   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:06:09.670303   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:06:09.670329   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:06:09.675093   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:09.955187   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:06:09.955258   33429 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:06:10.169704   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:06:10.455264   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.377776639s)
	I0815 00:06:10.455433   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.285322408s)
	I0815 00:06:10.455482   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.281226704s)
	I0815 00:06:12.160045   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.981063357s)
	I0815 00:06:12.160085   33429 addons.go:475] Verifying addon ingress=true in "addons-877132"
	I0815 00:06:12.160118   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.889032263s)
	I0815 00:06:12.160212   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.800835507s)
	I0815 00:06:12.160264   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.605841296s)
	I0815 00:06:12.160307   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.292133359s)
	I0815 00:06:12.160370   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.585212618s)
	I0815 00:06:12.160706   33429 addons.go:475] Verifying addon metrics-server=true in "addons-877132"
	I0815 00:06:12.162800   33429 out.go:177] * Verifying ingress addon...
	I0815 00:06:12.164520   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:12.166394   33429 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0815 00:06:12.170677   33429 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
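The warning above is an optimistic-concurrency failure: marking local-path as the default StorageClass raced with another writer, and the callback does not retry. A hedged sketch of applying the same annotation manually, using the class name from the error message:

	# storageclass.kubernetes.io/is-default-class is the standard default-class annotation.
	kubectl --context addons-877132 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'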
	I0815 00:06:12.177055   33429 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:06:12.177077   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
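The kapi.go:96 lines that follow are minikube's poll loop: re-list the pods matching the selector until none report Pending. A hedged kubectl approximation of the same wait (approximate, because the selector also matches the completed certgen job pods, which never become Ready; the test in this report narrows to the controller component for that reason):

	kubectl --context addons-877132 -n ingress-nginx wait \
	  --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m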
	I0815 00:06:12.670053   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:12.964117   33429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:06:12.964195   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:12.990306   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:13.090320   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.33266703s)
	W0815 00:06:13.090355   33429 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:06:13.090375   33429 retry.go:31] will retry after 175.622541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
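The failure above is a CRD establishment race: the VolumeSnapshot CRDs and the VolumeSnapshotClass that uses them travel in one apply, and the REST mapping for snapshot.storage.k8s.io/v1 is not ready when the class is submitted. minikube retries and, at 00:06:13.266 below, re-applies with --force. A hedged sketch of avoiding the race by gating on the CRD's Established condition before the second apply:

	kubectl --context addons-877132 wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# then re-apply the snapshotclass manifest (it lives inside the node, so via minikube ssh and the bundled kubectl)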
	I0815 00:06:13.090390   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.320236356s)
	I0815 00:06:13.090435   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.032315229s)
	I0815 00:06:13.090462   33429 addons.go:475] Verifying addon registry=true in "addons-877132"
	I0815 00:06:13.090501   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.120304899s)
	I0815 00:06:13.091944   33429 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-877132 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:06:13.091950   33429 out.go:177] * Verifying registry addon...
	I0815 00:06:13.093755   33429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:06:13.157110   33429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:06:13.157140   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:13.171785   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:13.256795   33429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:06:13.266211   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:13.275869   33429 addons.go:234] Setting addon gcp-auth=true in "addons-877132"
	I0815 00:06:13.275940   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:13.276428   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:13.297887   33429 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:06:13.297942   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:13.314684   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:13.658383   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.488585678s)
	I0815 00:06:13.658424   33429 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-877132"
	I0815 00:06:13.658651   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:13.659795   33429 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:06:13.662216   33429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:06:13.666005   33429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:06:13.666029   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:13.668835   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:14.155522   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:14.165718   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:14.166425   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:14.169249   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:14.596258   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:14.664609   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:14.670036   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:15.097283   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:15.166093   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:15.169339   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:15.596326   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:15.665152   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:15.669647   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:16.096862   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:16.165864   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:16.166340   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:16.196644   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.930396544s)
	I0815 00:06:16.196703   33429 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.898786484s)
	I0815 00:06:16.198662   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:16.198680   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:16.201338   33429 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:06:16.202541   33429 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:06:16.202556   33429 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:06:16.219803   33429 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:06:16.219831   33429 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:06:16.267002   33429 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:06:16.267071   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:06:16.283505   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:06:16.596842   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:16.665282   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:16.670150   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:16.805955   33429 addons.go:475] Verifying addon gcp-auth=true in "addons-877132"
	I0815 00:06:16.807303   33429 out.go:177] * Verifying gcp-auth addon...
	I0815 00:06:16.809043   33429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:06:16.811299   33429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:06:16.811318   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:17.096734   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:17.165013   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:17.168982   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:17.311617   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:17.597189   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:17.665310   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:17.669469   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:17.811621   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:18.097460   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:18.165631   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:18.169545   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:18.311265   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:18.597070   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:18.664448   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:18.697878   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:18.698131   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:18.811809   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:19.097224   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:19.165165   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:19.169296   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:19.317463   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:19.597102   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:19.665308   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:19.669472   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:19.812377   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:20.096809   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:20.165218   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:20.169284   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:20.312161   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:20.596603   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:20.665074   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:20.669058   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:20.812223   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:21.096596   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:21.165086   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:21.165136   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:21.168833   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:21.319500   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:21.596822   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:21.665152   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:21.669257   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:21.812305   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:22.096799   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:22.165049   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:22.168933   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:22.311831   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:22.596265   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:22.664599   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:22.669444   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:22.811399   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:23.096811   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:23.165124   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:23.165243   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:23.169359   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:23.312662   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:23.597142   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:23.664739   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:23.669884   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:23.811958   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:24.096209   33429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:06:24.096232   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:24.164915   33429 node_ready.go:49] node "addons-877132" has status "Ready":"True"
	I0815 00:06:24.164938   33429 node_ready.go:38] duration metric: took 16.503624973s for node "addons-877132" to be "Ready" ...
	I0815 00:06:24.164955   33429 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
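The node turned Ready here after 16.5s of polling, and the log switches to the extra wait for system-critical pods. As a rough, hypothetical sketch of what a node-readiness poll like the node_ready.go loop above involves (this is not minikube's actual code; the helper name, the 2-second cadence, and the 10-minute timeout are illustrative assumptions), in Go with client-go:

// Hypothetical sketch of a node-readiness poll; NOT minikube's node_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node's Ready condition is True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) bool {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false // treat transient API errors as "not ready yet" and keep polling
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the node reports Ready, as the log above does for ~16.5s.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return nodeIsReady(ctx, cs, "addons-877132"), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "addons-877132" is Ready`)
}

The check corresponds to the READY column of kubectl get node: the NodeReady condition flipping to True.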
	I0815 00:06:24.166049   33429 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:06:24.166068   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:24.170142   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:24.173429   33429 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-c42pc" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:24.355959   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:24.597389   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:24.666628   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:24.669410   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:24.812130   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:25.096547   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:25.167043   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:25.169602   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:25.355288   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:25.597426   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:25.666971   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:25.670355   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:25.678083   33429 pod_ready.go:92] pod "coredns-6f6b679f8f-c42pc" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.678106   33429 pod_ready.go:81] duration metric: took 1.504654703s for pod "coredns-6f6b679f8f-c42pc" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.678133   33429 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.682037   33429 pod_ready.go:92] pod "etcd-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.682055   33429 pod_ready.go:81] duration metric: took 3.913671ms for pod "etcd-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.682078   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.686074   33429 pod_ready.go:92] pod "kube-apiserver-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.686092   33429 pod_ready.go:81] duration metric: took 4.003183ms for pod "kube-apiserver-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.686104   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.690123   33429 pod_ready.go:92] pod "kube-controller-manager-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.690142   33429 pod_ready.go:81] duration metric: took 4.029781ms for pod "kube-controller-manager-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.690157   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6kx7" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.764591   33429 pod_ready.go:92] pod "kube-proxy-v6kx7" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.764670   33429 pod_ready.go:81] duration metric: took 74.503022ms for pod "kube-proxy-v6kx7" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.764686   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.812299   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:26.097806   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:26.169194   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:26.169487   33429 pod_ready.go:92] pod "kube-scheduler-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:26.169514   33429 pod_ready.go:81] duration metric: took 404.819415ms for pod "kube-scheduler-addons-877132" in "kube-system" namespace to be "Ready" ...
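Each of the pod_ready.go checks above succeeds by inspecting the pod's Ready condition. A minimal helper in the same vein (hypothetical, not minikube's implementation; written as another package main file so it compiles alongside the previous sketch):

// Hypothetical counterpart to the pod_ready.go checks logged above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod's PodReady condition is True,
// which is what `has status "Ready":"True"` in the log corresponds to.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}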
	I0815 00:06:26.169539   33429 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:26.172362   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:26.312540   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:26.597942   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:26.666295   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:26.670387   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:26.812404   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:27.097376   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:27.167501   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:27.169841   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:27.312952   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:27.599500   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:27.666954   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:27.669769   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:27.812771   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:28.097661   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:28.167011   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:28.169733   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:28.174188   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:28.312769   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:28.597722   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:28.666848   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:28.669482   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:28.812800   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:29.098209   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:29.167180   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:29.169890   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:29.312700   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:29.597754   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:29.665572   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:29.669489   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:29.812665   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:30.157157   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:30.166642   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:30.177519   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:30.180207   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:30.356588   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:30.597957   33429 kapi.go:107] duration metric: took 17.504196925s to wait for kubernetes.io/minikube-addons=registry ...
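The registry wait just completed after 17.5s; the repeating kapi.go lines above and below are a label-selector poll of this shape (a hypothetical sketch under the same client-go assumptions as before, with a 500ms cadence inferred from the log timestamps; not the real kapi.go):

// Hypothetical sketch of the kapi.go-style label-selector wait logged above.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSelector succeeds once at least one pod matches the selector and
// every matching pod is Running.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

For the wait that just finished, the call would look like waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute) (namespace assumed; the log does not name it for the registry selector).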
	I0815 00:06:30.666243   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:30.670815   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:30.813007   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:31.167613   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:31.169328   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:31.313181   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:31.666720   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:31.669434   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:31.811910   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:32.166506   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:32.169062   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:32.311776   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:32.666775   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:32.670042   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:32.674328   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:32.811997   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:33.169603   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:33.170197   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:33.356708   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:33.666304   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:33.670392   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:33.812677   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:34.167381   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:34.170171   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:34.312544   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:34.666532   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:34.669482   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:34.812211   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:35.167257   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:35.169456   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:35.173423   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:35.312015   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:35.666706   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:35.669857   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:35.812364   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:36.166598   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:36.169053   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:36.312373   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:36.667821   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:36.671196   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:36.857237   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:37.169468   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:37.170919   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:37.175323   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:37.355970   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:37.666841   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:37.670268   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:37.812428   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:38.167080   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:38.170187   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:38.312594   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:38.666054   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:38.669825   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:38.812021   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:39.166241   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:39.267161   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:39.311799   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:39.667701   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:39.670267   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:39.674734   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:39.812349   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:40.168015   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:40.169594   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:40.312432   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:40.665674   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:40.669752   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:40.812537   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:41.167876   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:41.169598   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:41.312173   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:41.666948   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:41.670803   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:41.812414   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:42.166289   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:42.169867   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:42.173537   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:42.311882   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:42.667753   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:42.670676   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:42.812295   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:43.169618   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:43.169854   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:43.313320   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:43.666307   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:43.670383   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:43.812476   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:44.167127   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:44.170013   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:44.174233   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:44.311846   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:44.667126   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:44.669867   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:44.855643   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:45.167763   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:45.170016   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:45.313417   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:45.666129   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:45.670059   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:45.813091   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:46.166678   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:46.169893   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:46.312450   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:46.665829   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:46.670061   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:46.674101   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:46.812249   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:47.169598   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:47.169608   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:47.312248   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:47.666873   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:47.669374   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:47.812134   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:48.167158   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:48.170215   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:48.312747   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:48.666203   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:48.670026   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:48.812154   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:49.166461   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:49.169165   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:49.174184   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:49.312823   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:49.666717   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:49.669712   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:49.812030   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:50.166358   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:50.170069   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:50.312131   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:50.666409   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:50.669159   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:50.811804   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:51.167643   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:51.170383   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:51.174565   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:51.357329   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:51.667694   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:51.673988   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:51.855570   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:52.167630   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:52.171705   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:52.357523   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:52.667416   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:52.671989   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:52.856473   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:53.167342   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:53.170225   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:53.357056   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:53.667785   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:53.670341   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:53.675346   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:53.812287   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:54.167962   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:54.169368   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:54.312346   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:54.665866   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:54.670543   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:54.812505   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:55.166866   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:55.169578   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:55.312197   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:55.667048   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:55.670233   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:55.812036   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:56.167650   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:56.169862   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:56.173888   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:56.312438   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:56.666120   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:56.670307   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:56.811811   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:57.168190   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:57.171201   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:57.313042   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:57.673029   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:57.675625   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:57.813071   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:58.167075   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:58.170091   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:58.175180   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:58.312767   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:58.666795   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:58.669391   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:58.812165   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:59.167267   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:59.170177   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:59.312417   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:59.666057   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:59.669690   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:59.811822   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:00.166831   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:00.170224   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:00.312503   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:00.667657   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:00.676413   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:00.767451   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:00.812517   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:01.166638   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:01.169325   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:01.312692   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:01.666198   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:01.669887   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:01.812554   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:02.168126   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:02.169326   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:02.313091   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:02.667880   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:02.669870   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:02.865938   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:03.167117   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:03.176085   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:03.267575   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:03.367374   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:03.666205   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:03.671232   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:03.812560   33429 kapi.go:107] duration metric: took 47.003516074s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:07:03.814146   33429 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-877132 cluster.
	I0815 00:07:03.815458   33429 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:07:03.816787   33429 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
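The three advisory lines above describe the gcp-auth webhook's opt-out. A hypothetical illustration of opting a pod out via the gcp-auth-skip-secret label (only the label key comes from the message above; the label value "true" and the rest of the pod spec are assumptions for illustration):

// Hypothetical pod creation with the gcp-auth opt-out label.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// Presence of this label key tells the webhook to skip the pod.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}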
	I0815 00:07:04.166733   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:04.170848   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:04.671507   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:04.684842   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.166792   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:05.169699   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.666612   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:05.669642   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.674348   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:06.166989   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:06.169586   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.667233   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:06.670146   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.166493   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.169125   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.667067   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.670993   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.674543   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:08.166585   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.169456   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.667276   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.670520   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.166747   33429 kapi.go:107] duration metric: took 55.504525178s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:07:09.169549   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.670367   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.170088   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.173925   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:10.670285   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.169891   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.670347   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.169423   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.174126   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:12.670730   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.169706   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.670690   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.169616   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.670409   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.675991   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:15.169947   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:15.670905   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.170207   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.670228   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.169428   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.173681   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:17.670291   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.169163   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.670150   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.169975   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.174116   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:19.768032   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.171759   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.670635   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.170137   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.670728   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.673692   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:22.169711   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:22.670073   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.170251   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.670243   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.674950   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:24.169467   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:24.670307   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.169275   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.670140   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.170211   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.174259   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:26.670923   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.169802   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.670058   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.170245   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.180361   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:28.671837   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.174730   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.671621   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.176404   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.259231   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:30.671253   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.170859   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.670796   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.170361   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.670998   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.674300   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:33.170206   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:33.671980   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:34.170420   33429 kapi.go:107] duration metric: took 1m22.004022687s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:07:34.172004   33429 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, helm-tiller, metrics-server, default-storageclass, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0815 00:07:34.173191   33429 addons.go:510] duration metric: took 1m28.545170819s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner helm-tiller metrics-server default-storageclass inspektor-gadget yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0815 00:07:35.175895   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:37.674777   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:40.174631   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:42.174721   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:44.674328   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:46.675786   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:49.174408   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:51.674873   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:54.174351   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:56.174565   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:58.174795   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:00.175420   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:02.674741   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:04.674774   33429 pod_ready.go:92] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:04.674806   33429 pod_ready.go:81] duration metric: took 1m38.505250087s for pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:04.674822   33429 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6d62n" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:04.678550   33429 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6d62n" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:04.678569   33429 pod_ready.go:81] duration metric: took 3.739721ms for pod "nvidia-device-plugin-daemonset-6d62n" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:04.678586   33429 pod_ready.go:38] duration metric: took 1m40.513617774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:08:04.678603   33429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:08:04.678630   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:04.678677   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:04.710676   33429 cri.go:89] found id: "ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:04.710700   33429 cri.go:89] found id: ""
	I0815 00:08:04.710708   33429 logs.go:276] 1 containers: [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249]
	I0815 00:08:04.710757   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.713725   33429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:04.713779   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:04.744311   33429 cri.go:89] found id: "f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:04.744335   33429 cri.go:89] found id: ""
	I0815 00:08:04.744345   33429 logs.go:276] 1 containers: [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec]
	I0815 00:08:04.744387   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.747394   33429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:04.747437   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:04.777949   33429 cri.go:89] found id: "4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:04.777966   33429 cri.go:89] found id: ""
	I0815 00:08:04.777973   33429 logs.go:276] 1 containers: [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3]
	I0815 00:08:04.778010   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.780902   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:04.780976   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:04.812184   33429 cri.go:89] found id: "bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:04.812204   33429 cri.go:89] found id: ""
	I0815 00:08:04.812213   33429 logs.go:276] 1 containers: [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0]
	I0815 00:08:04.812254   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.815194   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:04.815263   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:04.845303   33429 cri.go:89] found id: "e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:04.845321   33429 cri.go:89] found id: ""
	I0815 00:08:04.845329   33429 logs.go:276] 1 containers: [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1]
	I0815 00:08:04.845367   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.848510   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:04.848570   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:04.879573   33429 cri.go:89] found id: "4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:04.879594   33429 cri.go:89] found id: ""
	I0815 00:08:04.879601   33429 logs.go:276] 1 containers: [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280]
	I0815 00:08:04.879654   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.882866   33429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:04.882926   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:04.913837   33429 cri.go:89] found id: "17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:04.913859   33429 cri.go:89] found id: ""
	I0815 00:08:04.913866   33429 logs.go:276] 1 containers: [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677]
	I0815 00:08:04.913905   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.917007   33429 logs.go:123] Gathering logs for kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] ...
	I0815 00:08:04.917030   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:04.947729   33429 logs.go:123] Gathering logs for kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] ...
	I0815 00:08:04.947755   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:04.983589   33429 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:04.983615   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:04.995473   33429 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:04.995501   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:05.087662   33429 logs.go:123] Gathering logs for kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] ...
	I0815 00:08:05.087690   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:05.129108   33429 logs.go:123] Gathering logs for coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] ...
	I0815 00:08:05.129137   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:05.164587   33429 logs.go:123] Gathering logs for kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] ...
	I0815 00:08:05.164624   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:05.203248   33429 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:05.203273   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:05.270185   33429 logs.go:123] Gathering logs for etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] ...
	I0815 00:08:05.270214   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:05.317054   33429 logs.go:123] Gathering logs for kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] ...
	I0815 00:08:05.317083   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:05.370222   33429 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:05.370252   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:05.446168   33429 logs.go:123] Gathering logs for container status ...
	I0815 00:08:05.446204   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:07.987446   33429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:08:08.000573   33429 api_server.go:72] duration metric: took 2m2.372588715s to wait for apiserver process to appear ...
	I0815 00:08:08.000594   33429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:08:08.000627   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:08.000662   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:08.031934   33429 cri.go:89] found id: "ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:08.031958   33429 cri.go:89] found id: ""
	I0815 00:08:08.031967   33429 logs.go:276] 1 containers: [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249]
	I0815 00:08:08.032018   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.034976   33429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:08.035037   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:08.065162   33429 cri.go:89] found id: "f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:08.065186   33429 cri.go:89] found id: ""
	I0815 00:08:08.065194   33429 logs.go:276] 1 containers: [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec]
	I0815 00:08:08.065236   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.068160   33429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:08.068208   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:08.099502   33429 cri.go:89] found id: "4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:08.099523   33429 cri.go:89] found id: ""
	I0815 00:08:08.099531   33429 logs.go:276] 1 containers: [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3]
	I0815 00:08:08.099578   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.102636   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:08.102683   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:08.134129   33429 cri.go:89] found id: "bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:08.134149   33429 cri.go:89] found id: ""
	I0815 00:08:08.134157   33429 logs.go:276] 1 containers: [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0]
	I0815 00:08:08.134193   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.137077   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:08.137118   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:08.169612   33429 cri.go:89] found id: "e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:08.169633   33429 cri.go:89] found id: ""
	I0815 00:08:08.169643   33429 logs.go:276] 1 containers: [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1]
	I0815 00:08:08.169693   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.173000   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:08.173051   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:08.203461   33429 cri.go:89] found id: "4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:08.203485   33429 cri.go:89] found id: ""
	I0815 00:08:08.203494   33429 logs.go:276] 1 containers: [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280]
	I0815 00:08:08.203533   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.206389   33429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:08.206430   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:08.236086   33429 cri.go:89] found id: "17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:08.236109   33429 cri.go:89] found id: ""
	I0815 00:08:08.236119   33429 logs.go:276] 1 containers: [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677]
	I0815 00:08:08.236166   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.239141   33429 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:08.239159   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:08.249874   33429 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:08.249896   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:08.340261   33429 logs.go:123] Gathering logs for kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] ...
	I0815 00:08:08.340287   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:08.394232   33429 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:08.394260   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:08.466817   33429 logs.go:123] Gathering logs for container status ...
	I0815 00:08:08.466849   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:08.506450   33429 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:08.506477   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:08.573143   33429 logs.go:123] Gathering logs for kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] ...
	I0815 00:08:08.573173   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:08.613210   33429 logs.go:123] Gathering logs for etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] ...
	I0815 00:08:08.613235   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:08.659426   33429 logs.go:123] Gathering logs for coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] ...
	I0815 00:08:08.659453   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:08.695176   33429 logs.go:123] Gathering logs for kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] ...
	I0815 00:08:08.695200   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:08.732673   33429 logs.go:123] Gathering logs for kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] ...
	I0815 00:08:08.732699   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:08.762290   33429 logs.go:123] Gathering logs for kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] ...
	I0815 00:08:08.762314   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:11.299374   33429 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 00:08:11.302863   33429 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 00:08:11.303608   33429 api_server.go:141] control plane version: v1.31.0
	I0815 00:08:11.303629   33429 api_server.go:131] duration metric: took 3.30302873s to wait for apiserver health ...
	I0815 00:08:11.303638   33429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:08:11.303662   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:11.303715   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:11.335368   33429 cri.go:89] found id: "ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:11.335387   33429 cri.go:89] found id: ""
	I0815 00:08:11.335394   33429 logs.go:276] 1 containers: [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249]
	I0815 00:08:11.335433   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.338517   33429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:11.338588   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:11.368653   33429 cri.go:89] found id: "f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:11.368675   33429 cri.go:89] found id: ""
	I0815 00:08:11.368682   33429 logs.go:276] 1 containers: [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec]
	I0815 00:08:11.368727   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.371711   33429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:11.371762   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:11.403775   33429 cri.go:89] found id: "4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:11.403798   33429 cri.go:89] found id: ""
	I0815 00:08:11.403808   33429 logs.go:276] 1 containers: [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3]
	I0815 00:08:11.403853   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.406855   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:11.406913   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:11.437894   33429 cri.go:89] found id: "bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:11.437911   33429 cri.go:89] found id: ""
	I0815 00:08:11.437918   33429 logs.go:276] 1 containers: [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0]
	I0815 00:08:11.437963   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.440939   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:11.440996   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:11.472247   33429 cri.go:89] found id: "e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:11.472267   33429 cri.go:89] found id: ""
	I0815 00:08:11.472274   33429 logs.go:276] 1 containers: [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1]
	I0815 00:08:11.472312   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.475285   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:11.475339   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:11.505337   33429 cri.go:89] found id: "4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:11.505359   33429 cri.go:89] found id: ""
	I0815 00:08:11.505367   33429 logs.go:276] 1 containers: [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280]
	I0815 00:08:11.505419   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.508302   33429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:11.508356   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:11.539121   33429 cri.go:89] found id: "17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:11.539144   33429 cri.go:89] found id: ""
	I0815 00:08:11.539153   33429 logs.go:276] 1 containers: [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677]
	I0815 00:08:11.539199   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.542054   33429 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:11.542077   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:11.611565   33429 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:11.611596   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:11.623230   33429 logs.go:123] Gathering logs for etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] ...
	I0815 00:08:11.623255   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:11.670940   33429 logs.go:123] Gathering logs for kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] ...
	I0815 00:08:11.670967   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:11.706879   33429 logs.go:123] Gathering logs for container status ...
	I0815 00:08:11.706906   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:11.745902   33429 logs.go:123] Gathering logs for kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] ...
	I0815 00:08:11.745929   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:11.802685   33429 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:11.802714   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:11.873752   33429 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:11.873781   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:11.962736   33429 logs.go:123] Gathering logs for kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] ...
	I0815 00:08:11.962765   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:12.004013   33429 logs.go:123] Gathering logs for coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] ...
	I0815 00:08:12.004041   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:12.039680   33429 logs.go:123] Gathering logs for kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] ...
	I0815 00:08:12.039709   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:12.079354   33429 logs.go:123] Gathering logs for kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] ...
	I0815 00:08:12.079381   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:14.620661   33429 system_pods.go:59] 19 kube-system pods found
	I0815 00:08:14.620688   33429 system_pods.go:61] "coredns-6f6b679f8f-c42pc" [c7d6d0e1-376e-4009-b23c-4ec563e9fb5c] Running
	I0815 00:08:14.620693   33429 system_pods.go:61] "csi-hostpath-attacher-0" [fc9a04f6-9b77-46c0-8179-7faf0b4d0508] Running
	I0815 00:08:14.620696   33429 system_pods.go:61] "csi-hostpath-resizer-0" [7832e4c7-4b14-4716-a24c-299d683020e7] Running
	I0815 00:08:14.620700   33429 system_pods.go:61] "csi-hostpathplugin-9bq4q" [20f345c9-95b5-4fdd-9b09-0ef44d9e025c] Running
	I0815 00:08:14.620703   33429 system_pods.go:61] "etcd-addons-877132" [c9fcbdb6-c56f-4565-955e-bd059a243317] Running
	I0815 00:08:14.620706   33429 system_pods.go:61] "kindnet-chbk7" [d5bb12f8-f766-4a6c-96d9-4a736660a5d4] Running
	I0815 00:08:14.620710   33429 system_pods.go:61] "kube-apiserver-addons-877132" [f11ef0cb-06f5-43c2-ab90-9e16415dfbdb] Running
	I0815 00:08:14.620715   33429 system_pods.go:61] "kube-controller-manager-addons-877132" [feefe7f6-b920-4abc-868e-c757b7f0611e] Running
	I0815 00:08:14.620719   33429 system_pods.go:61] "kube-ingress-dns-minikube" [a8fc2d7b-0cd2-425b-a632-15debd9dd0c7] Running
	I0815 00:08:14.620724   33429 system_pods.go:61] "kube-proxy-v6kx7" [ba0854ec-7db4-4e33-9e58-c440a176fab5] Running
	I0815 00:08:14.620728   33429 system_pods.go:61] "kube-scheduler-addons-877132" [711196de-fe86-4df3-9d53-f4e1ccd343e5] Running
	I0815 00:08:14.620733   33429 system_pods.go:61] "metrics-server-8988944d9-sgrxc" [39bb006b-3cb8-4b3f-bd6c-a14e00873f12] Running
	I0815 00:08:14.620741   33429 system_pods.go:61] "nvidia-device-plugin-daemonset-6d62n" [0b96b707-d892-4a7c-9728-5d4ddf5b5465] Running
	I0815 00:08:14.620747   33429 system_pods.go:61] "registry-6fb4cdfc84-r4n2w" [6ba345fc-6428-44c4-a39f-a525f747a85d] Running
	I0815 00:08:14.620755   33429 system_pods.go:61] "registry-proxy-9j2gn" [dafac940-abdc-432d-9a46-cf80da8907aa] Running
	I0815 00:08:14.620759   33429 system_pods.go:61] "snapshot-controller-56fcc65765-fcg26" [94c41682-f8b9-44c9-be9d-f4967e9d88fb] Running
	I0815 00:08:14.620762   33429 system_pods.go:61] "snapshot-controller-56fcc65765-gmh75" [8d111fc4-b50c-4b66-b7ed-f75310edc407] Running
	I0815 00:08:14.620765   33429 system_pods.go:61] "storage-provisioner" [da0204ad-464f-432a-8431-4e0541f190da] Running
	I0815 00:08:14.620771   33429 system_pods.go:61] "tiller-deploy-b48cc5f79-bthmf" [62d076df-bde8-40cf-ab28-b8fba5fea0d6] Running
	I0815 00:08:14.620777   33429 system_pods.go:74] duration metric: took 3.317132352s to wait for pod list to return data ...
	I0815 00:08:14.620786   33429 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:08:14.623061   33429 default_sa.go:45] found service account: "default"
	I0815 00:08:14.623081   33429 default_sa.go:55] duration metric: took 2.290351ms for default service account to be created ...
	I0815 00:08:14.623090   33429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:08:14.630696   33429 system_pods.go:86] 19 kube-system pods found
	I0815 00:08:14.630721   33429 system_pods.go:89] "coredns-6f6b679f8f-c42pc" [c7d6d0e1-376e-4009-b23c-4ec563e9fb5c] Running
	I0815 00:08:14.630729   33429 system_pods.go:89] "csi-hostpath-attacher-0" [fc9a04f6-9b77-46c0-8179-7faf0b4d0508] Running
	I0815 00:08:14.630735   33429 system_pods.go:89] "csi-hostpath-resizer-0" [7832e4c7-4b14-4716-a24c-299d683020e7] Running
	I0815 00:08:14.630741   33429 system_pods.go:89] "csi-hostpathplugin-9bq4q" [20f345c9-95b5-4fdd-9b09-0ef44d9e025c] Running
	I0815 00:08:14.630746   33429 system_pods.go:89] "etcd-addons-877132" [c9fcbdb6-c56f-4565-955e-bd059a243317] Running
	I0815 00:08:14.630752   33429 system_pods.go:89] "kindnet-chbk7" [d5bb12f8-f766-4a6c-96d9-4a736660a5d4] Running
	I0815 00:08:14.630758   33429 system_pods.go:89] "kube-apiserver-addons-877132" [f11ef0cb-06f5-43c2-ab90-9e16415dfbdb] Running
	I0815 00:08:14.630766   33429 system_pods.go:89] "kube-controller-manager-addons-877132" [feefe7f6-b920-4abc-868e-c757b7f0611e] Running
	I0815 00:08:14.630773   33429 system_pods.go:89] "kube-ingress-dns-minikube" [a8fc2d7b-0cd2-425b-a632-15debd9dd0c7] Running
	I0815 00:08:14.630783   33429 system_pods.go:89] "kube-proxy-v6kx7" [ba0854ec-7db4-4e33-9e58-c440a176fab5] Running
	I0815 00:08:14.630790   33429 system_pods.go:89] "kube-scheduler-addons-877132" [711196de-fe86-4df3-9d53-f4e1ccd343e5] Running
	I0815 00:08:14.630798   33429 system_pods.go:89] "metrics-server-8988944d9-sgrxc" [39bb006b-3cb8-4b3f-bd6c-a14e00873f12] Running
	I0815 00:08:14.630809   33429 system_pods.go:89] "nvidia-device-plugin-daemonset-6d62n" [0b96b707-d892-4a7c-9728-5d4ddf5b5465] Running
	I0815 00:08:14.630817   33429 system_pods.go:89] "registry-6fb4cdfc84-r4n2w" [6ba345fc-6428-44c4-a39f-a525f747a85d] Running
	I0815 00:08:14.630827   33429 system_pods.go:89] "registry-proxy-9j2gn" [dafac940-abdc-432d-9a46-cf80da8907aa] Running
	I0815 00:08:14.630834   33429 system_pods.go:89] "snapshot-controller-56fcc65765-fcg26" [94c41682-f8b9-44c9-be9d-f4967e9d88fb] Running
	I0815 00:08:14.630844   33429 system_pods.go:89] "snapshot-controller-56fcc65765-gmh75" [8d111fc4-b50c-4b66-b7ed-f75310edc407] Running
	I0815 00:08:14.630853   33429 system_pods.go:89] "storage-provisioner" [da0204ad-464f-432a-8431-4e0541f190da] Running
	I0815 00:08:14.630859   33429 system_pods.go:89] "tiller-deploy-b48cc5f79-bthmf" [62d076df-bde8-40cf-ab28-b8fba5fea0d6] Running
	I0815 00:08:14.630869   33429 system_pods.go:126] duration metric: took 7.771619ms to wait for k8s-apps to be running ...
	I0815 00:08:14.630880   33429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:08:14.630927   33429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:08:14.641293   33429 system_svc.go:56] duration metric: took 10.409007ms WaitForService to wait for kubelet
	I0815 00:08:14.641320   33429 kubeadm.go:582] duration metric: took 2m9.013343958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:08:14.641343   33429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:08:14.644057   33429 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 00:08:14.644080   33429 node_conditions.go:123] node cpu capacity is 8
	I0815 00:08:14.644090   33429 node_conditions.go:105] duration metric: took 2.743633ms to run NodePressure ...
	I0815 00:08:14.644101   33429 start.go:241] waiting for startup goroutines ...
	I0815 00:08:14.644107   33429 start.go:246] waiting for cluster config update ...
	I0815 00:08:14.644121   33429 start.go:255] writing updated cluster config ...
	I0815 00:08:14.644346   33429 ssh_runner.go:195] Run: rm -f paused
	I0815 00:08:14.690031   33429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:08:14.691992   33429 out.go:177] * Done! kubectl is now configured to use "addons-877132" cluster and "default" namespace by default
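	
	For reference, the checks recorded above can be replayed by hand. A minimal sketch, assuming the addons-877132 profile is still running and the apiserver is reachable from the host at the IP shown in the log (minikube performs the healthz request in-process with the cluster CA, so the -k flag here is only a convenience):
	
	  # List control-plane containers the same way the Run: lines above do.
	  minikube -p addons-877132 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	
	  # Probe the healthz endpoint checked at 00:08:11; it should print "ok".
	  curl -k https://192.168.49.2:8443/healthz
	
	  # Tail the kubelet unit logs exactly as the "Gathering logs for kubelet" step does.
	  minikube -p addons-877132 ssh -- sudo journalctl -u kubelet -n 400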
	
	
	==> CRI-O <==
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.667290168Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=145eb94e-2b9e-4e45-b88a-3d1d6ef57e9b name=/runtime.v1.ImageService/ImageStatus
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.667948532Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-jw59v/hello-world-app" id=811e0cd4-e075-476c-88b4-28ffcd3459fe name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.668041226Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.683178281Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/27316ecac6d7b0fabf24fb992047dca01c1dc1e53a897e5551c292b138e4352c/merged/etc/passwd: no such file or directory"
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.683209197Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/27316ecac6d7b0fabf24fb992047dca01c1dc1e53a897e5551c292b138e4352c/merged/etc/group: no such file or directory"
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.713091649Z" level=info msg="Created container a77039a7ec09160ed190c729e30ef59309e91cc4d276738f4a10e97171c54eba: default/hello-world-app-55bf9c44b4-jw59v/hello-world-app" id=811e0cd4-e075-476c-88b4-28ffcd3459fe name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.713596497Z" level=info msg="Starting container: a77039a7ec09160ed190c729e30ef59309e91cc4d276738f4a10e97171c54eba" id=69c3df8f-751f-492d-bb35-9c1bff92eabf name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 00:11:21 addons-877132 crio[1030]: time="2024-08-15 00:11:21.718525559Z" level=info msg="Started container" PID=11216 containerID=a77039a7ec09160ed190c729e30ef59309e91cc4d276738f4a10e97171c54eba description=default/hello-world-app-55bf9c44b4-jw59v/hello-world-app id=69c3df8f-751f-492d-bb35-9c1bff92eabf name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2e48d3540fb534143f7578821450552a2724443486cfc153768b34a2d2f693d
	Aug 15 00:11:22 addons-877132 crio[1030]: time="2024-08-15 00:11:22.369279616Z" level=info msg="Removing container: 4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9" id=9330cad4-53a9-45f6-a6ea-fc1e34e1042a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:11:22 addons-877132 crio[1030]: time="2024-08-15 00:11:22.382780146Z" level=info msg="Removed container 4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=9330cad4-53a9-45f6-a6ea-fc1e34e1042a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:11:23 addons-877132 crio[1030]: time="2024-08-15 00:11:23.892764775Z" level=info msg="Stopping container: 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d (timeout: 2s)" id=796a1376-cb99-419b-8ae5-8578c17ba820 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:11:25 addons-877132 crio[1030]: time="2024-08-15 00:11:25.898483374Z" level=warning msg="Stopping container 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=796a1376-cb99-419b-8ae5-8578c17ba820 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:11:25 addons-877132 conmon[6433]: conmon 8b6c013e33250c6bcb7d <ninfo>: container 6445 exited with status 137
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.028023902Z" level=info msg="Stopped container 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d: ingress-nginx/ingress-nginx-controller-7559cbf597-qfwsb/controller" id=796a1376-cb99-419b-8ae5-8578c17ba820 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.028481605Z" level=info msg="Stopping pod sandbox: e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=cb383282-5d43-4287-8d1e-b6a4525c6ffc name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.031294965Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-SIQZTLT7E2ELBK4O - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-DPMPHDS3ZAPSCFT6 - [0:0]\n-X KUBE-HP-SIQZTLT7E2ELBK4O\n-X KUBE-HP-DPMPHDS3ZAPSCFT6\nCOMMIT\n"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.032535563Z" level=info msg="Closing host port tcp:80"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.032569846Z" level=info msg="Closing host port tcp:443"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.033881278Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.033901687Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.034094225Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7559cbf597-qfwsb Namespace:ingress-nginx ID:e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4 UID:382b8687-211c-483f-ae46-64db5d2c2738 NetNS:/var/run/netns/ce926a11-4517-41a1-9901-186bc4c4d261 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.034203210Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-qfwsb from CNI network \"kindnet\" (type=ptp)"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.062960615Z" level=info msg="Stopped pod sandbox: e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=cb383282-5d43-4287-8d1e-b6a4525c6ffc name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.380101929Z" level=info msg="Removing container: 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d" id=1c2b606a-14e4-42f5-a1b2-78ab1b63db5f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.392251595Z" level=info msg="Removed container 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d: ingress-nginx/ingress-nginx-controller-7559cbf597-qfwsb/controller" id=1c2b606a-14e4-42f5-a1b2-78ab1b63db5f name=/runtime.v1.RuntimeService/RemoveContainer
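	
	Exit status 137 in the conmon line above is 128 + 9: the ingress-nginx controller did not exit within the 2-second stop timeout, so the runtime SIGKILLed it. If the container had not already been removed, its recorded state and exit code could be read back with crictl (a sketch; crictl accepts a full ID or any unique prefix):
	
	  minikube -p addons-877132 ssh -- sudo crictl inspect 8b6c013e33250c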
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a77039a7ec091       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   e2e48d3540fb5       hello-world-app-55bf9c44b4-jw59v
	4700be58d0014       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   59d991d01214e       nginx
	13564dbfd5a46       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   75dbd279c225c       busybox
	639140631be81       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     2                   2787c4d975b29       ingress-nginx-admission-patch-pds8t
	ce1da5c1eb8b3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   d13d9edd73ff2       ingress-nginx-admission-create-6bdfx
	70a331a391562       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   f0f38be4fe7eb       metrics-server-8988944d9-sgrxc
	dd563d287505a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   e7ea6d4b55ddc       local-path-provisioner-86d989889c-zjfx8
	4ba66a3367daf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   5a0b205e08bed       coredns-6f6b679f8f-c42pc
	03e7fb303164d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   5106a78d90785       storage-provisioner
	17f6bd6dd22c5       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           5 minutes ago       Running             kindnet-cni               0                   dfc330b405c09       kindnet-chbk7
	e5fd37ee5ee48       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   0a7cee1c53467       kube-proxy-v6kx7
	bd77b5ecfadb9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   af893679823f2       kube-scheduler-addons-877132
	f16f228580088       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   d052b7010e20a       etcd-addons-877132
	ea70e9f2778e6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   9f6fa62a9f394       kube-apiserver-addons-877132
	4043a5cc95e0b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   2875939046c1e       kube-controller-manager-addons-877132
	
	
	==> coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] <==
	[INFO] 10.244.0.2:47292 - 10763 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113724s
	[INFO] 10.244.0.2:43420 - 43249 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004319366s
	[INFO] 10.244.0.2:43420 - 28402 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005696932s
	[INFO] 10.244.0.2:44876 - 48744 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005006536s
	[INFO] 10.244.0.2:44876 - 2155 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.018558843s
	[INFO] 10.244.0.2:58123 - 62458 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004398771s
	[INFO] 10.244.0.2:58123 - 33254 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004753437s
	[INFO] 10.244.0.2:33809 - 16532 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000082026s
	[INFO] 10.244.0.2:33809 - 31121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117847s
	[INFO] 10.244.0.20:57076 - 48443 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153305s
	[INFO] 10.244.0.20:60786 - 431 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000243713s
	[INFO] 10.244.0.20:48667 - 18154 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120297s
	[INFO] 10.244.0.20:60036 - 57584 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157371s
	[INFO] 10.244.0.20:35143 - 29134 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109833s
	[INFO] 10.244.0.20:54584 - 62281 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111359s
	[INFO] 10.244.0.20:33107 - 11591 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007471795s
	[INFO] 10.244.0.20:41578 - 30236 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007561275s
	[INFO] 10.244.0.20:44643 - 23246 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004784347s
	[INFO] 10.244.0.20:57858 - 50433 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006739376s
	[INFO] 10.244.0.20:33262 - 49571 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00388293s
	[INFO] 10.244.0.20:54723 - 15767 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004555087s
	[INFO] 10.244.0.20:33820 - 52576 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001059757s
	[INFO] 10.244.0.20:40606 - 49433 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001834019s
	[INFO] 10.244.0.26:35309 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000170417s
	[INFO] 10.244.0.26:52476 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113322s
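	
	The NXDOMAIN/NOERROR pattern above is normal Kubernetes resolver behaviour, not an error: with ndots:5, short names such as registry.kube-system and storage.googleapis.com are first expanded through every search domain (cluster.local plus the GCE host suffixes visible in the queries) before the final absolute lookup succeeds. A pod's /etc/resolv.conf in this cluster would look roughly like the following sketch (the nameserver IP and exact search order are assumptions based on kubeadm defaults and the suffixes seen in the log):
	
	  nameserver 10.96.0.10
	  search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	  options ndots:5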
	
	
	==> describe nodes <==
	Name:               addons-877132
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-877132
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-877132
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_06_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-877132
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:05:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-877132
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:11:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:09:34 +0000   Thu, 15 Aug 2024 00:05:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:09:34 +0000   Thu, 15 Aug 2024 00:05:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:09:34 +0000   Thu, 15 Aug 2024 00:05:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:09:34 +0000   Thu, 15 Aug 2024 00:06:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-877132
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ac911fcfea74347829f75c9c0b9cec6
	  System UUID:                c27f2cf4-9042-4197-8c06-a1fdd73beeb7
	  Boot ID:                    adfcefd8-b451-4316-855f-752470c63d29
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     hello-world-app-55bf9c44b4-jw59v           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-6f6b679f8f-c42pc                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m25s
	  kube-system                 etcd-addons-877132                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m31s
	  kube-system                 kindnet-chbk7                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m26s
	  kube-system                 kube-apiserver-addons-877132               250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-addons-877132      200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-v6kx7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-addons-877132               100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 metrics-server-8988944d9-sgrxc             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m21s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  local-path-storage          local-path-provisioner-86d989889c-zjfx8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)   100m (1%)
	  memory             420Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m20s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node addons-877132 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node addons-877132 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node addons-877132 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m31s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m31s                  kubelet          Node addons-877132 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m31s                  kubelet          Node addons-877132 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m31s                  kubelet          Node addons-877132 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m27s                  node-controller  Node addons-877132 event: Registered Node addons-877132 in Controller
	  Normal   NodeReady                5m8s                   kubelet          Node addons-877132 status is now: NodeReady
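	
	The node dump above is the output of the kubectl invocation recorded in the "Gathering logs for describe nodes" steps; against a live cluster it can be regenerated either from inside the node (the first command is verbatim from the Run: line) or from the host with plain kubectl (the --context name follows minikube's convention of naming the context after the profile):
	
	  # inside the node (via minikube ssh):
	  sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	  # from the host:
	  kubectl --context addons-877132 describe node addons-877132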
	
	
	==> dmesg <==
	[  +0.000630] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000616] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000609] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.594631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044972] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005902] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013048] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002588] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017548] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.299942] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 00:09] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +1.000074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +2.015815] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +4.255606] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +8.191208] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[ +16.126475] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[Aug15 00:10] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	
	
	==> etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] <==
	{"level":"warn","ts":"2024-08-15T00:06:09.661825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.013375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:06:09.661889Z","caller":"traceutil/trace.go:171","msg":"trace[2058467710] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:0; response_revision:449; }","duration":"107.078526ms","start":"2024-08-15T00:06:09.554802Z","end":"2024-08-15T00:06:09.661881Z","steps":["trace[2058467710] 'agreement among raft nodes before linearized reading'  (duration: 106.974073ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:09.960832Z","caller":"traceutil/trace.go:171","msg":"trace[1035712257] linearizableReadLoop","detail":"{readStateIndex:469; appliedIndex:468; }","duration":"186.731601ms","start":"2024-08-15T00:06:09.774078Z","end":"2024-08-15T00:06:09.960809Z","steps":["trace[1035712257] 'read index received'  (duration: 183.924781ms)","trace[1035712257] 'applied index is now lower than readState.Index'  (duration: 2.806055ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:06:09.961756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.995899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:06:09.962261Z","caller":"traceutil/trace.go:171","msg":"trace[356290812] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"201.072963ms","start":"2024-08-15T00:06:09.760739Z","end":"2024-08-15T00:06:09.961812Z","steps":["trace[356290812] 'agreement among raft nodes before linearized reading'  (duration: 200.95514ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:09.962450Z","caller":"traceutil/trace.go:171","msg":"trace[1489601163] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"197.664471ms","start":"2024-08-15T00:06:09.764761Z","end":"2024-08-15T00:06:09.962426Z","steps":["trace[1489601163] 'process raft request'  (duration: 192.720086ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:10.158962Z","caller":"traceutil/trace.go:171","msg":"trace[1685509554] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"101.328154ms","start":"2024-08-15T00:06:10.057619Z","end":"2024-08-15T00:06:10.158947Z","steps":["trace[1685509554] 'process raft request'  (duration: 98.627594ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:10.760371Z","caller":"traceutil/trace.go:171","msg":"trace[109234857] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"184.795977ms","start":"2024-08-15T00:06:10.575558Z","end":"2024-08-15T00:06:10.760354Z","steps":["trace[109234857] 'process raft request'  (duration: 178.834908ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:06:10.760803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.370207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/registry-6fb4cdfc84\" ","response":"range_response_count:1 size:2551"}
	{"level":"info","ts":"2024-08-15T00:06:10.760848Z","caller":"traceutil/trace.go:171","msg":"trace[1951375711] range","detail":"{range_begin:/registry/replicasets/kube-system/registry-6fb4cdfc84; range_end:; response_count:1; response_revision:515; }","duration":"103.424885ms","start":"2024-08-15T00:06:10.657413Z","end":"2024-08-15T00:06:10.760838Z","steps":["trace[1951375711] 'agreement among raft nodes before linearized reading'  (duration: 103.276902ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:10.760670Z","caller":"traceutil/trace.go:171","msg":"trace[203738500] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:524; }","duration":"103.241482ms","start":"2024-08-15T00:06:10.657417Z","end":"2024-08-15T00:06:10.760659Z","steps":["trace[203738500] 'read index received'  (duration: 96.983851ms)","trace[203738500] 'applied index is now lower than readState.Index'  (duration: 6.256736ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:06:10.761005Z","caller":"traceutil/trace.go:171","msg":"trace[874513999] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"103.439543ms","start":"2024-08-15T00:06:10.657558Z","end":"2024-08-15T00:06:10.760998Z","steps":["trace[874513999] 'process raft request'  (duration: 102.874463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:06:10.764836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.833354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-08-15T00:06:10.764930Z","caller":"traceutil/trace.go:171","msg":"trace[798552039] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:516; }","duration":"102.934075ms","start":"2024-08-15T00:06:10.661985Z","end":"2024-08-15T00:06:10.764919Z","steps":["trace[798552039] 'agreement among raft nodes before linearized reading'  (duration: 102.728931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:06:10.765323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.385976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-877132\" ","response":"range_response_count:1 size:5648"}
	{"level":"info","ts":"2024-08-15T00:06:10.765416Z","caller":"traceutil/trace.go:171","msg":"trace[306473127] range","detail":"{range_begin:/registry/minions/addons-877132; range_end:; response_count:1; response_revision:516; }","duration":"100.480741ms","start":"2024-08-15T00:06:10.664926Z","end":"2024-08-15T00:06:10.765407Z","steps":["trace[306473127] 'agreement among raft nodes before linearized reading'  (duration: 100.370266ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:07:08.287875Z","caller":"traceutil/trace.go:171","msg":"trace[1444096404] linearizableReadLoop","detail":"{readStateIndex:1251; appliedIndex:1250; }","duration":"114.749683ms","start":"2024-08-15T00:07:08.173107Z","end":"2024-08-15T00:07:08.287857Z","steps":["trace[1444096404] 'read index received'  (duration: 114.546126ms)","trace[1444096404] 'applied index is now lower than readState.Index'  (duration: 202.358µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:07:08.287946Z","caller":"traceutil/trace.go:171","msg":"trace[1350768147] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"116.664519ms","start":"2024-08-15T00:07:08.171264Z","end":"2024-08-15T00:07:08.287929Z","steps":["trace[1350768147] 'process raft request'  (duration: 116.440866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:07:08.288062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.93862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-877132\" ","response":"range_response_count:1 size:9170"}
	{"level":"info","ts":"2024-08-15T00:07:08.288095Z","caller":"traceutil/trace.go:171","msg":"trace[205809032] range","detail":"{range_begin:/registry/minions/addons-877132; range_end:; response_count:1; response_revision:1219; }","duration":"114.983891ms","start":"2024-08-15T00:07:08.173100Z","end":"2024-08-15T00:07:08.288084Z","steps":["trace[205809032] 'agreement among raft nodes before linearized reading'  (duration: 114.848983ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:07:19.613367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.767064ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031214691300417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:70cc91535af63440>","response":"size:41"}
	{"level":"info","ts":"2024-08-15T00:07:19.763949Z","caller":"traceutil/trace.go:171","msg":"trace[1049542164] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"192.04081ms","start":"2024-08-15T00:07:19.571892Z","end":"2024-08-15T00:07:19.763933Z","steps":["trace[1049542164] 'process raft request'  (duration: 169.989844ms)","trace[1049542164] 'compare'  (duration: 21.954224ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:07:19.765023Z","caller":"traceutil/trace.go:171","msg":"trace[49433671] transaction","detail":"{read_only:false; response_revision:1245; number_of_response:1; }","duration":"150.971344ms","start":"2024-08-15T00:07:19.614039Z","end":"2024-08-15T00:07:19.765010Z","steps":["trace[49433671] 'process raft request'  (duration: 150.874636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:09:20.066162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.995095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:09:20.066232Z","caller":"traceutil/trace.go:171","msg":"trace[1279599202] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1860; }","duration":"108.076285ms","start":"2024-08-15T00:09:19.958140Z","end":"2024-08-15T00:09:20.066216Z","steps":["trace[1279599202] 'range keys from in-memory index tree'  (duration: 107.949119ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:11:31 up  1:53,  0 users,  load average: 0.15, 0.48, 0.31
	Linux addons-877132 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] <==
	E0815 00:10:21.064434       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 00:10:22.469985       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:10:22.470022       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 00:10:23.555752       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:10:23.555792       1 main.go:299] handling current node
	I0815 00:10:33.555299       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:10:33.555337       1 main.go:299] handling current node
	W0815 00:10:34.266809       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:10:34.266845       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:10:43.555667       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:10:43.555710       1 main.go:299] handling current node
	I0815 00:10:53.555221       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:10:53.555257       1 main.go:299] handling current node
	W0815 00:11:00.296633       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:11:00.296662       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 00:11:03.555385       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:11:03.555416       1 main.go:299] handling current node
	W0815 00:11:07.457910       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:11:07.457947       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:11:13.556053       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:11:13.556086       1 main.go:299] handling current node
	W0815 00:11:14.562989       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:11:14.563021       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 00:11:23.556046       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:11:23.556083       1 main.go:299] handling current node
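The recurring "forbidden" errors above all ask for the same thing: cluster-scope list/watch on pods, namespaces, and networkpolicies for the kindnet service account (kindnet also watches nodes, per the "Handling node" lines). Purely as an illustration of the permission shape those denials point at — not minikube's actual manifest — the corresponding ClusterRole could be expressed with the k8s.io/api types:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Illustrative only: a role granting the exact list/watch calls the
	// kindnet reflectors are being denied in the log above.
	role := rbacv1.ClusterRole{
		TypeMeta:   metav1.TypeMeta{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "ClusterRole"},
		ObjectMeta: metav1.ObjectMeta{Name: "kindnet"},
		Rules: []rbacv1.PolicyRule{
			// "" is the core API group: pods, namespaces, and nodes live there.
			{APIGroups: []string{""}, Resources: []string{"pods", "namespaces", "nodes"}, Verbs: []string{"list", "watch"}},
			// NetworkPolicy lives in networking.k8s.io.
			{APIGroups: []string{"networking.k8s.io"}, Resources: []string{"networkpolicies"}, Verbs: []string{"list", "watch"}},
		},
	}
	out, err := yaml.Marshal(role)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```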
	
	
	==> kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] <==
	I0815 00:08:04.597292       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 00:08:22.099434       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55458: use of closed network connection
	E0815 00:08:22.246304       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55482: use of closed network connection
	E0815 00:08:50.694888       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:40444: read: connection reset by peer
	I0815 00:08:53.212855       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.166.95"}
	I0815 00:08:55.861522       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 00:08:56.104430       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 00:08:57.263455       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 00:09:01.545972       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 00:09:01.698738       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.145.87"}
	I0815 00:09:30.366936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.366998       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.378927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.378969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.382001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.382053       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.391883       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.392003       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.498706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.498743       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 00:09:31.382946       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 00:09:31.499329       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 00:09:31.509054       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0815 00:11:20.893935       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.70.207"}
	E0815 00:11:22.915796       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] <==
	W0815 00:10:08.501409       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:08.501443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:10:10.708192       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:10.708230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:10:23.114171       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:23.114208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:10:44.561961       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:44.562001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:10:44.792049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:44.792094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:10:49.815010       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:49.815050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:10:58.447804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:10:58.447889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:11:20.696942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.54421ms"
	I0815 00:11:20.700098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.110621ms"
	I0815 00:11:20.700220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="75.383µs"
	I0815 00:11:20.703288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.186µs"
	I0815 00:11:22.386057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.560534ms"
	I0815 00:11:22.386153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.056µs"
	I0815 00:11:22.882165       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 00:11:22.883571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="9.901µs"
	I0815 00:11:22.885926       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0815 00:11:29.316553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:11:29.316612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] <==
	I0815 00:06:08.771141       1 server_linux.go:66] "Using iptables proxy"
	I0815 00:06:09.677856       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 00:06:09.677943       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:06:10.256713       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 00:06:10.256859       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:06:10.265814       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:06:10.266511       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:06:10.266784       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:06:10.268405       1 config.go:197] "Starting service config controller"
	I0815 00:06:10.269673       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:06:10.269385       1 config.go:326] "Starting node config controller"
	I0815 00:06:10.269842       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:06:10.268940       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:06:10.269934       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:06:10.370849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:06:10.373009       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:06:10.373024       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] <==
	W0815 00:05:58.063472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:05:58.063858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:05:58.063889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:05:58.063914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:05:58.063935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:05:58.063952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.984099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:05:58.984141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.991354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:05:58.991389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.015615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:05:59.015655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.046949       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:05:59.046980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:05:59.062581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 00:05:59.062629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.078903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:05:59.078935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.094087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:05:59.094126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:06:01.762374       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:11:20 addons-877132 kubelet[1633]: I0815 00:11:20.858072    1633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlq5j\" (UniqueName: \"kubernetes.io/projected/c01330a3-4fe5-40e7-ba66-04fc92ff0e44-kube-api-access-jlq5j\") pod \"hello-world-app-55bf9c44b4-jw59v\" (UID: \"c01330a3-4fe5-40e7-ba66-04fc92ff0e44\") " pod="default/hello-world-app-55bf9c44b4-jw59v"
	Aug 15 00:11:21 addons-877132 kubelet[1633]: I0815 00:11:21.767339    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzsmp\" (UniqueName: \"kubernetes.io/projected/a8fc2d7b-0cd2-425b-a632-15debd9dd0c7-kube-api-access-tzsmp\") pod \"a8fc2d7b-0cd2-425b-a632-15debd9dd0c7\" (UID: \"a8fc2d7b-0cd2-425b-a632-15debd9dd0c7\") "
	Aug 15 00:11:21 addons-877132 kubelet[1633]: I0815 00:11:21.768992    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fc2d7b-0cd2-425b-a632-15debd9dd0c7-kube-api-access-tzsmp" (OuterVolumeSpecName: "kube-api-access-tzsmp") pod "a8fc2d7b-0cd2-425b-a632-15debd9dd0c7" (UID: "a8fc2d7b-0cd2-425b-a632-15debd9dd0c7"). InnerVolumeSpecName "kube-api-access-tzsmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:11:21 addons-877132 kubelet[1633]: I0815 00:11:21.868311    1633 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tzsmp\" (UniqueName: \"kubernetes.io/projected/a8fc2d7b-0cd2-425b-a632-15debd9dd0c7-kube-api-access-tzsmp\") on node \"addons-877132\" DevicePath \"\""
	Aug 15 00:11:22 addons-877132 kubelet[1633]: I0815 00:11:22.368372    1633 scope.go:117] "RemoveContainer" containerID="4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9"
	Aug 15 00:11:22 addons-877132 kubelet[1633]: I0815 00:11:22.382992    1633 scope.go:117] "RemoveContainer" containerID="4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9"
	Aug 15 00:11:22 addons-877132 kubelet[1633]: E0815 00:11:22.383345    1633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9\": container with ID starting with 4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9 not found: ID does not exist" containerID="4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9"
	Aug 15 00:11:22 addons-877132 kubelet[1633]: I0815 00:11:22.383390    1633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9"} err="failed to get container status \"4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9\": rpc error: code = NotFound desc = could not find container \"4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9\": container with ID starting with 4303733cb1643a6d48d8e2e86b379d2610f3e730ca3c15256d2fdef5280face9 not found: ID does not exist"
	Aug 15 00:11:22 addons-877132 kubelet[1633]: I0815 00:11:22.389264    1633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-jw59v" podStartSLOduration=1.791431929 podStartE2EDuration="2.389247938s" podCreationTimestamp="2024-08-15 00:11:20 +0000 UTC" firstStartedPulling="2024-08-15 00:11:21.068385828 +0000 UTC m=+321.107886961" lastFinishedPulling="2024-08-15 00:11:21.666201826 +0000 UTC m=+321.705702970" observedRunningTime="2024-08-15 00:11:22.380800187 +0000 UTC m=+322.420301349" watchObservedRunningTime="2024-08-15 00:11:22.389247938 +0000 UTC m=+322.428749083"
	Aug 15 00:11:24 addons-877132 kubelet[1633]: I0815 00:11:24.072102    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cea3c86-47b9-446f-89d1-bb847ee9969a" path="/var/lib/kubelet/pods/0cea3c86-47b9-446f-89d1-bb847ee9969a/volumes"
	Aug 15 00:11:24 addons-877132 kubelet[1633]: I0815 00:11:24.072461    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56137687-fa9f-4135-8b93-22b9f97e61ad" path="/var/lib/kubelet/pods/56137687-fa9f-4135-8b93-22b9f97e61ad/volumes"
	Aug 15 00:11:24 addons-877132 kubelet[1633]: I0815 00:11:24.072737    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fc2d7b-0cd2-425b-a632-15debd9dd0c7" path="/var/lib/kubelet/pods/a8fc2d7b-0cd2-425b-a632-15debd9dd0c7/volumes"
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.195955    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twv6n\" (UniqueName: \"kubernetes.io/projected/382b8687-211c-483f-ae46-64db5d2c2738-kube-api-access-twv6n\") pod \"382b8687-211c-483f-ae46-64db5d2c2738\" (UID: \"382b8687-211c-483f-ae46-64db5d2c2738\") "
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.195999    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/382b8687-211c-483f-ae46-64db5d2c2738-webhook-cert\") pod \"382b8687-211c-483f-ae46-64db5d2c2738\" (UID: \"382b8687-211c-483f-ae46-64db5d2c2738\") "
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.197660    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382b8687-211c-483f-ae46-64db5d2c2738-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "382b8687-211c-483f-ae46-64db5d2c2738" (UID: "382b8687-211c-483f-ae46-64db5d2c2738"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.198115    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382b8687-211c-483f-ae46-64db5d2c2738-kube-api-access-twv6n" (OuterVolumeSpecName: "kube-api-access-twv6n") pod "382b8687-211c-483f-ae46-64db5d2c2738" (UID: "382b8687-211c-483f-ae46-64db5d2c2738"). InnerVolumeSpecName "kube-api-access-twv6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.296381    1633 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-twv6n\" (UniqueName: \"kubernetes.io/projected/382b8687-211c-483f-ae46-64db5d2c2738-kube-api-access-twv6n\") on node \"addons-877132\" DevicePath \"\""
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.296417    1633 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/382b8687-211c-483f-ae46-64db5d2c2738-webhook-cert\") on node \"addons-877132\" DevicePath \"\""
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.379158    1633 scope.go:117] "RemoveContainer" containerID="8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d"
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.392420    1633 scope.go:117] "RemoveContainer" containerID="8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d"
	Aug 15 00:11:26 addons-877132 kubelet[1633]: E0815 00:11:26.392651    1633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d\": container with ID starting with 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d not found: ID does not exist" containerID="8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d"
	Aug 15 00:11:26 addons-877132 kubelet[1633]: I0815 00:11:26.392676    1633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d"} err="failed to get container status \"8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d\": rpc error: code = NotFound desc = could not find container \"8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d\": container with ID starting with 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d not found: ID does not exist"
	Aug 15 00:11:28 addons-877132 kubelet[1633]: I0815 00:11:28.071758    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382b8687-211c-483f-ae46-64db5d2c2738" path="/var/lib/kubelet/pods/382b8687-211c-483f-ae46-64db5d2c2738/volumes"
	Aug 15 00:11:30 addons-877132 kubelet[1633]: E0815 00:11:30.311237    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680690311018129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:11:30 addons-877132 kubelet[1633]: E0815 00:11:30.311269    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680690311018129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [03e7fb303164d2ef427adb835d31e224473b8f74e6cf70ad41f8bf76d02c9292] <==
	I0815 00:06:24.994803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:06:25.003166       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:06:25.003199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:06:25.009325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:06:25.009364       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7744846a-ef9e-42e9-90e6-1e26a8341167", APIVersion:"v1", ResourceVersion:"943", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-877132_ee44d2d3-4c44-4a23-bef6-1d5ee9ac4c4c became leader
	I0815 00:06:25.009462       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-877132_ee44d2d3-4c44-4a23-bef6-1d5ee9ac4c4c!
	I0815 00:06:25.110558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-877132_ee44d2d3-4c44-4a23-bef6-1d5ee9ac4c4c!
	

-- /stdout --
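The storage-provisioner section at the end of the dump shows a textbook client-go leader election: acquire the kube-system/k8s.io-minikube-hostpath lock, and start the controller only once the lease is won. A minimal sketch of that pattern, assuming in-cluster credentials and using the newer Lease lock rather than the Endpoints lock the log shows; the lock name and namespace come from the log, the identity string is made up:

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "addons-877132_example"}, // hypothetical identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// Real work starts only here, after the lease is acquired.
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting controller") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}
```

Starting work only inside OnStartedLeading is what produces the ordering of the provisioner's four log lines above: initialize, attempt acquire, acquire, then start the controller.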
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-877132 -n addons-877132
helpers_test.go:261: (dbg) Run:  kubectl --context addons-877132 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.46s)

x
+
TestAddons/parallel/MetricsServer (325.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.958176ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-sgrxc" [39bb006b-3cb8-4b3f-bd6c-a14e00873f12] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003495046s
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (62.764863ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 2m42.150229852s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (66.947303ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 2m46.572440627s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (61.311119ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 2m51.181804764s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (60.470462ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 3m0.843456084s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (61.277222ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 3m7.515862478s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (59.192581ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 3m25.607876563s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (58.665704ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 3m48.305939547s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (59.243749ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 4m31.867064426s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (58.677895ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 5m19.320287387s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (58.453343ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 5m52.691149132s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (59.910577ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 6m48.135753869s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877132 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877132 top pods -n kube-system: exit status 1 (58.612425ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-c42pc, age: 7m59.544513477s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
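The twelve `kubectl top pods` attempts above are a plain poll-until-deadline loop: the pod ages in the errors climb from ~2m42s to ~8m while metrics never materialize, so the test's 6-minute budget lapses. A throwaway sketch of that pattern (not the test's actual code; the context name comes from the log, the retry interval is arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Budget matches the 6m0s wait the test declares above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", "addons-877132",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // metrics-server finally answered
			return
		}
		fmt.Printf("not ready: %v\n%s", err, out)
		select {
		case <-ctx.Done():
			fmt.Println("giving up:", ctx.Err())
			return
		case <-time.After(10 * time.Second):
		}
	}
}
```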
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-877132
helpers_test.go:235: (dbg) docker inspect addons-877132:

-- stdout --
	[
	    {
	        "Id": "0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748",
	        "Created": "2024-08-15T00:05:47.313639387Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T00:05:47.430605182Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:49d4702e5c94195d7796cb79f5fbc9d7cc584c1c41f3c58bf1694d1da009b2f6",
	        "ResolvConfPath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/hosts",
	        "LogPath": "/var/lib/docker/containers/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748/0a128850adc6c9739319d0ccdc3a9eea5e6209a1908ca45931643f617a920748-json.log",
	        "Name": "/addons-877132",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-877132:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-877132",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811-init/diff:/var/lib/docker/overlay2/0205a5511280a28ae3b2781b04e306ca3ba6d39df24866040bde00e4e577fc69/diff",
	                "MergedDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811/merged",
	                "UpperDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811/diff",
	                "WorkDir": "/var/lib/docker/overlay2/076552733d609200f850ab223a0029186d490bef6b897443d3c21b9f8104b811/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-877132",
	                "Source": "/var/lib/docker/volumes/addons-877132/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-877132",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-877132",
	                "name.minikube.sigs.k8s.io": "addons-877132",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a0fbe5e4a1988f743bcdf7dea1f27c6a575bb4991e0dc783f167f6a2c62a4ac",
	            "SandboxKey": "/var/run/docker/netns/6a0fbe5e4a19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-877132": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "92741e9c6adef761a12cc5aa129b7ea5de95847ec3af60896db99bb0f8592a7c",
	                    "EndpointID": "e03ef1cf5500ca2f0df1215461c824d1aaac3f152cbba89e7dd5d59184418014",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-877132",
	                        "0a128850adc6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
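The inspect dump above can be narrowed to the fields relevant to this failure with the same --format Go templates the harness itself runs later in these logs; a minimal sketch (note that hyphenated map keys such as addons-877132 have to be read via the template index function):

	docker inspect -f '{{.State.Status}}' addons-877132
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-877132
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-877132").IPAddress}}' addons-877132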
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-877132 -n addons-877132
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 logs -n 25: (1.026507116s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-237330                                                                   | download-docker-237330 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-616195   | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | binary-mirror-616195                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46729                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-616195                                                                     | binary-mirror-616195   | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-877132 --wait=true                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-877132 ssh cat                                                                       | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | /opt/local-path-provisioner/pvc-56d7ae18-0d09-496f-9576-9fd79c71aa37_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | -p addons-877132                                                                            |                        |         |         |                     |                     |
	| ip      | addons-877132 ip                                                                            | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:08 UTC |
	|         | -p addons-877132                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:09 UTC |
	|         | addons-877132                                                                               |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-877132 ssh curl -s                                                                   | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-877132 addons                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-877132 addons                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-877132 ip                                                                            | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:11 UTC | 15 Aug 24 00:11 UTC |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:11 UTC | 15 Aug 24 00:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-877132 addons disable                                                                | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:11 UTC | 15 Aug 24 00:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-877132 addons                                                                        | addons-877132          | jenkins | v1.33.1 | 15 Aug 24 00:14 UTC | 15 Aug 24 00:14 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:05:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:05:23.654201   33429 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:05:23.654618   33429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:23.654663   33429 out.go:304] Setting ErrFile to fd 2...
	I0815 00:05:23.654681   33429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:23.655134   33429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:05:23.655982   33429 out.go:298] Setting JSON to false
	I0815 00:05:23.656755   33429 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6461,"bootTime":1723673863,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:05:23.656807   33429 start.go:139] virtualization: kvm guest
	I0815 00:05:23.658523   33429 out.go:177] * [addons-877132] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:05:23.659971   33429 notify.go:220] Checking for updates...
	I0815 00:05:23.659982   33429 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:05:23.661059   33429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:05:23.662403   33429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:05:23.663582   33429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:05:23.664704   33429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:05:23.665903   33429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:05:23.667224   33429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:05:23.687835   33429 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:05:23.687962   33429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:23.732664   33429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 00:05:23.724426498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:23.732801   33429 docker.go:307] overlay module found
	I0815 00:05:23.734680   33429 out.go:177] * Using the docker driver based on user configuration
	I0815 00:05:23.735854   33429 start.go:297] selected driver: docker
	I0815 00:05:23.735875   33429 start.go:901] validating driver "docker" against <nil>
	I0815 00:05:23.735889   33429 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:05:23.736663   33429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:23.783497   33429 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 00:05:23.775412376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:23.783655   33429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:05:23.783845   33429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:05:23.785330   33429 out.go:177] * Using Docker driver with root privileges
	I0815 00:05:23.786691   33429 cni.go:84] Creating CNI manager for ""
	I0815 00:05:23.786706   33429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:05:23.786715   33429 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:05:23.786761   33429 start.go:340] cluster config:
	{Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:23.787982   33429 out.go:177] * Starting "addons-877132" primary control-plane node in "addons-877132" cluster
	I0815 00:05:23.789023   33429 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:05:23.790242   33429 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:05:23.791298   33429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:23.791325   33429 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:05:23.791336   33429 cache.go:56] Caching tarball of preloaded images
	I0815 00:05:23.791373   33429 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:05:23.791398   33429 preload.go:172] Found /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:05:23.791407   33429 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:05:23.791714   33429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/config.json ...
	I0815 00:05:23.791738   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/config.json: {Name:mk5c91fbc1c1fde61b892ae0ae5591fd2dd76b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:23.805688   33429 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:05:23.805810   33429 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:05:23.805828   33429 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:05:23.805832   33429 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:05:23.805840   33429 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:05:23.805847   33429 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 00:05:35.207757   33429 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 00:05:35.207800   33429 cache.go:194] Successfully downloaded all kic artifacts
	I0815 00:05:35.207842   33429 start.go:360] acquireMachinesLock for addons-877132: {Name:mk87c4769b05652828bbd513a339608563304c52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:05:35.207952   33429 start.go:364] duration metric: took 89.15µs to acquireMachinesLock for "addons-877132"
	I0815 00:05:35.207977   33429 start.go:93] Provisioning new machine with config: &{Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:05:35.208064   33429 start.go:125] createHost starting for "" (driver="docker")
	I0815 00:05:35.209932   33429 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 00:05:35.210140   33429 start.go:159] libmachine.API.Create for "addons-877132" (driver="docker")
	I0815 00:05:35.210169   33429 client.go:168] LocalClient.Create starting
	I0815 00:05:35.210265   33429 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem
	I0815 00:05:35.403780   33429 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem
	I0815 00:05:35.581910   33429 cli_runner.go:164] Run: docker network inspect addons-877132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 00:05:35.597259   33429 cli_runner.go:211] docker network inspect addons-877132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 00:05:35.597337   33429 network_create.go:284] running [docker network inspect addons-877132] to gather additional debugging logs...
	I0815 00:05:35.597356   33429 cli_runner.go:164] Run: docker network inspect addons-877132
	W0815 00:05:35.612656   33429 cli_runner.go:211] docker network inspect addons-877132 returned with exit code 1
	I0815 00:05:35.612683   33429 network_create.go:287] error running [docker network inspect addons-877132]: docker network inspect addons-877132: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-877132 not found
	I0815 00:05:35.612694   33429 network_create.go:289] output of [docker network inspect addons-877132]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-877132 not found
	
	** /stderr **
	I0815 00:05:35.612781   33429 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:05:35.628068   33429 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000157c0}
	I0815 00:05:35.628115   33429 network_create.go:124] attempt to create docker network addons-877132 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 00:05:35.628158   33429 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-877132 addons-877132
	I0815 00:05:35.684711   33429 network_create.go:108] docker network addons-877132 192.168.49.0/24 created
	I0815 00:05:35.684740   33429 kic.go:121] calculated static IP "192.168.49.2" for the "addons-877132" container
	I0815 00:05:35.684801   33429 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 00:05:35.699815   33429 cli_runner.go:164] Run: docker volume create addons-877132 --label name.minikube.sigs.k8s.io=addons-877132 --label created_by.minikube.sigs.k8s.io=true
	I0815 00:05:35.715691   33429 oci.go:103] Successfully created a docker volume addons-877132
	I0815 00:05:35.715787   33429 cli_runner.go:164] Run: docker run --rm --name addons-877132-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877132 --entrypoint /usr/bin/test -v addons-877132:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 00:05:42.917047   33429 cli_runner.go:217] Completed: docker run --rm --name addons-877132-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877132 --entrypoint /usr/bin/test -v addons-877132:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (7.201218931s)
	I0815 00:05:42.917075   33429 oci.go:107] Successfully prepared a docker volume addons-877132
	I0815 00:05:42.917090   33429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:42.917109   33429 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 00:05:42.917177   33429 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-877132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 00:05:47.252511   33429 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-877132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.335289814s)
	I0815 00:05:47.252538   33429 kic.go:203] duration metric: took 4.335426883s to extract preloaded images to volume ...
	W0815 00:05:47.252667   33429 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 00:05:47.252767   33429 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 00:05:47.299562   33429 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-877132 --name addons-877132 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877132 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-877132 --network addons-877132 --ip 192.168.49.2 --volume addons-877132:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 00:05:47.614924   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Running}}
	I0815 00:05:47.633132   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:05:47.650026   33429 cli_runner.go:164] Run: docker exec addons-877132 stat /var/lib/dpkg/alternatives/iptables
	I0815 00:05:47.690704   33429 oci.go:144] the created container "addons-877132" has a running status.
	I0815 00:05:47.690734   33429 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa...
	I0815 00:05:47.887374   33429 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 00:05:47.912208   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:05:47.932744   33429 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 00:05:47.932762   33429 kic_runner.go:114] Args: [docker exec --privileged addons-877132 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 00:05:47.981634   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:05:47.999627   33429 machine.go:94] provisionDockerMachine start ...
	I0815 00:05:47.999690   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.016577   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.016770   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.016782   33429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:05:48.232779   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877132
	
	I0815 00:05:48.232815   33429 ubuntu.go:169] provisioning hostname "addons-877132"
	I0815 00:05:48.232872   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.251859   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.252026   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.252041   33429 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-877132 && echo "addons-877132" | sudo tee /etc/hostname
	I0815 00:05:48.391228   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877132
	
	I0815 00:05:48.391307   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.407474   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.407658   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.407674   33429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-877132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-877132/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-877132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:05:48.537347   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:05:48.537372   33429 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-25263/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-25263/.minikube}
	I0815 00:05:48.537409   33429 ubuntu.go:177] setting up certificates
	I0815 00:05:48.537421   33429 provision.go:84] configureAuth start
	I0815 00:05:48.537467   33429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877132
	I0815 00:05:48.553566   33429 provision.go:143] copyHostCerts
	I0815 00:05:48.553637   33429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-25263/.minikube/key.pem (1675 bytes)
	I0815 00:05:48.553746   33429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-25263/.minikube/ca.pem (1078 bytes)
	I0815 00:05:48.553868   33429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-25263/.minikube/cert.pem (1123 bytes)
	I0815 00:05:48.553930   33429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-25263/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca-key.pem org=jenkins.addons-877132 san=[127.0.0.1 192.168.49.2 addons-877132 localhost minikube]
	I0815 00:05:48.723505   33429 provision.go:177] copyRemoteCerts
	I0815 00:05:48.723557   33429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:05:48.723588   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.739526   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:48.837635   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:05:48.857192   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:05:48.876384   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 00:05:48.895738   33429 provision.go:87] duration metric: took 358.301506ms to configureAuth
	I0815 00:05:48.895761   33429 ubuntu.go:193] setting minikube options for container-runtime
	I0815 00:05:48.895946   33429 config.go:182] Loaded profile config "addons-877132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:05:48.896036   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:48.911607   33429 main.go:141] libmachine: Using SSH client type: native
	I0815 00:05:48.911755   33429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0815 00:05:48.911770   33429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:05:49.120408   33429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
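The SSH command above writes the registry option into /etc/sysconfig/crio.minikube (presumably picked up by crio.service through an EnvironmentFile= drop-in in the kicbase image) and then restarts CRI-O. To inspect the result on a running profile:

	minikube ssh -p addons-877132 -- cat /etc/sysconfig/crio.minikube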
	I0815 00:05:49.120437   33429 machine.go:97] duration metric: took 1.12079224s to provisionDockerMachine
	I0815 00:05:49.120452   33429 client.go:171] duration metric: took 13.910275572s to LocalClient.Create
	I0815 00:05:49.120476   33429 start.go:167] duration metric: took 13.910334619s to libmachine.API.Create "addons-877132"
	I0815 00:05:49.120490   33429 start.go:293] postStartSetup for "addons-877132" (driver="docker")
	I0815 00:05:49.120505   33429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:05:49.120592   33429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:05:49.120645   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.135907   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.229819   33429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:05:49.232457   33429 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 00:05:49.232497   33429 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 00:05:49.232511   33429 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 00:05:49.232522   33429 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 00:05:49.232534   33429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-25263/.minikube/addons for local assets ...
	I0815 00:05:49.232593   33429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-25263/.minikube/files for local assets ...
	I0815 00:05:49.232614   33429 start.go:296] duration metric: took 112.117099ms for postStartSetup
	I0815 00:05:49.232863   33429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877132
	I0815 00:05:49.248484   33429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/config.json ...
	I0815 00:05:49.248733   33429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:05:49.248790   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.263312   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.354018   33429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 00:05:49.357822   33429 start.go:128] duration metric: took 14.149744159s to createHost
	I0815 00:05:49.357843   33429 start.go:83] releasing machines lock for "addons-877132", held for 14.149879091s
	I0815 00:05:49.357891   33429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877132
	I0815 00:05:49.373827   33429 ssh_runner.go:195] Run: cat /version.json
	I0815 00:05:49.373875   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.373874   33429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:05:49.373952   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:05:49.388848   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.389550   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:05:49.544079   33429 ssh_runner.go:195] Run: systemctl --version
	I0815 00:05:49.547823   33429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:05:49.682891   33429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:05:49.686787   33429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:05:49.702937   33429 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 00:05:49.703005   33429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:05:49.726571   33429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
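The two find invocations above are logged with their shell quoting stripped by ssh_runner. Restored for readability (a reconstruction assuming GNU find, which substitutes {} even inside an argument):

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;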
	I0815 00:05:49.726594   33429 start.go:495] detecting cgroup driver to use...
	I0815 00:05:49.726621   33429 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 00:05:49.726658   33429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:05:49.739246   33429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:05:49.748243   33429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:05:49.748292   33429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:05:49.759758   33429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:05:49.771605   33429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:05:49.845117   33429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:05:49.920932   33429 docker.go:233] disabling docker service ...
	I0815 00:05:49.920986   33429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:05:49.936575   33429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:05:49.945679   33429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:05:50.020526   33429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:05:50.097001   33429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:05:50.106254   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:05:50.119192   33429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:05:50.119247   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.126943   33429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:05:50.126988   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.134580   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.142147   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.149864   33429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:05:50.156952   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.164563   33429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:05:50.177100   33429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
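Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this (a sketch assembled from the substitutions; the section headers and anything else already present in the kicbase image's 02-crio.conf are assumptions):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]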
	I0815 00:05:50.184728   33429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:05:50.191170   33429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:05:50.197628   33429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:05:50.267275   33429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:05:50.361312   33429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:05:50.361385   33429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:05:50.364378   33429 start.go:563] Will wait 60s for crictl version
	I0815 00:05:50.364426   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:05:50.367117   33429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:05:50.397013   33429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 00:05:50.397116   33429 ssh_runner.go:195] Run: crio --version
	I0815 00:05:50.429244   33429 ssh_runner.go:195] Run: crio --version
	I0815 00:05:50.461529   33429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 00:05:50.462727   33429 cli_runner.go:164] Run: docker network inspect addons-877132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:05:50.477480   33429 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 00:05:50.480493   33429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
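The hosts-file one-liner above follows a write-temp-then-copy pattern: filter out any stale host.minikube.internal line, append the fresh mapping, write the result to /tmp/h.$$, and only then copy it over /etc/hosts, so readers never see a truncated file. The same command with line breaks added (the pattern recurs below for control-plane.minikube.internal):

	{
	  grep -v $'\thost.minikube.internal$' "/etc/hosts"
	  echo "192.168.49.1	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ "/etc/hosts"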
	I0815 00:05:50.489550   33429 kubeadm.go:883] updating cluster {Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:05:50.489649   33429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:50.489701   33429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:05:50.550221   33429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:05:50.550242   33429 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:05:50.550279   33429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:05:50.579201   33429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:05:50.579222   33429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:05:50.579229   33429 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 00:05:50.579313   33429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-877132 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
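In the kubelet unit snippet above, the bare ExecStart= line is the usual systemd drop-in idiom: assigning an empty value clears the ExecStart inherited from the base kubelet.service so the drop-in's own command line (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the scp step below) fully replaces it. The generic pattern:

	# /etc/systemd/system/<unit>.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/path/to/binary --with-new-flags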
	I0815 00:05:50.579367   33429 ssh_runner.go:195] Run: crio config
	I0815 00:05:50.616570   33429 cni.go:84] Creating CNI manager for ""
	I0815 00:05:50.616587   33429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:05:50.616596   33429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:05:50.616615   33429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-877132 NodeName:addons-877132 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:05:50.616737   33429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-877132"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:05:50.616787   33429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:05:50.624272   33429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:05:50.624316   33429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:05:50.631299   33429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 00:05:50.645652   33429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:05:50.660401   33429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0815 00:05:50.674927   33429 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 00:05:50.677624   33429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:05:50.686437   33429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:05:50.757391   33429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:05:50.768422   33429 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132 for IP: 192.168.49.2
	I0815 00:05:50.768442   33429 certs.go:194] generating shared ca certs ...
	I0815 00:05:50.768461   33429 certs.go:226] acquiring lock for ca certs: {Name:mk309157fa54119ea004edf6a36596f33b512455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:50.768591   33429 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key
	I0815 00:05:51.184009   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt ...
	I0815 00:05:51.184041   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt: {Name:mk2281b087378b5171f6a3ababac7c23d91f7a2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.184205   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key ...
	I0815 00:05:51.184215   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key: {Name:mk7f28e7104766f3bc3ab7a26fee1d70165eac48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.184292   33429 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key
	I0815 00:05:51.306696   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.crt ...
	I0815 00:05:51.306724   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.crt: {Name:mk007ceaa696b48cf9b73125039c9ff11d73a36e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.306876   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key ...
	I0815 00:05:51.306886   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key: {Name:mk6d0aefb75ddffa612443a728f4dc6aa04f663c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.307002   33429 certs.go:256] generating profile certs ...
	I0815 00:05:51.307058   33429 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.key
	I0815 00:05:51.307071   33429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt with IP's: []
	I0815 00:05:51.500129   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt ...
	I0815 00:05:51.500154   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: {Name:mk439bedf422c6d72db5acc435a7cea939a2f4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.500292   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.key ...
	I0815 00:05:51.500301   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.key: {Name:mk3dc5113cd977cffed1c4766b6188c8c37f9ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.500364   33429 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf
	I0815 00:05:51.500381   33429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 00:05:51.609033   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf ...
	I0815 00:05:51.609058   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf: {Name:mk6703eb6edd26daf5046bd4ca2b634b9cafdd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.609196   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf ...
	I0815 00:05:51.609208   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf: {Name:mk478e8492cd5c7d56e515385c8a0a37e3aba211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.609275   33429 certs.go:381] copying /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt.e7c27cbf -> /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt
	I0815 00:05:51.609363   33429 certs.go:385] copying /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key.e7c27cbf -> /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key
	I0815 00:05:51.609426   33429 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key
	I0815 00:05:51.609444   33429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt with IP's: []
	I0815 00:05:51.900454   33429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt ...
	I0815 00:05:51.900483   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt: {Name:mkc962b237253f5c62e68e3c76301d6fa0e4fa6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.900657   33429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key ...
	I0815 00:05:51.900668   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key: {Name:mk276eb8609a41c9cf483090c2f7a4fd7e3e1b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:05:51.900838   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 00:05:51.900870   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:05:51.900893   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:05:51.900916   33429 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-25263/.minikube/certs/key.pem (1675 bytes)
	I0815 00:05:51.901483   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:05:51.921595   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 00:05:51.940717   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:05:51.960157   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:05:51.979624   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:05:51.998486   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 00:05:52.017320   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:05:52.037272   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:05:52.056417   33429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:05:52.076144   33429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:05:52.090393   33429 ssh_runner.go:195] Run: openssl version
	I0815 00:05:52.094916   33429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:05:52.102405   33429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:05:52.105121   33429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:05:52.105164   33429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:05:52.110939   33429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
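The two steps above implement OpenSSL's hashed-directory certificate lookup: openssl x509 -hash prints the subject-name hash (b5213941 for minikubeCA, matching the symlink target), and OpenSSL finds CAs in /etc/ssl/certs via <hash>.<n> symlinks. Done by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0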
	I0815 00:05:52.118348   33429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:05:52.120909   33429 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:05:52.120944   33429 kubeadm.go:392] StartCluster: {Name:addons-877132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-877132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:52.121035   33429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:05:52.121078   33429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:05:52.150788   33429 cri.go:89] found id: ""
	I0815 00:05:52.150851   33429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:05:52.158002   33429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:05:52.165020   33429 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 00:05:52.165057   33429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:05:52.172493   33429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:05:52.172506   33429 kubeadm.go:157] found existing configuration files:
	
	I0815 00:05:52.172543   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:05:52.179306   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:05:52.179343   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:05:52.186501   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:05:52.193388   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:05:52.193429   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:05:52.200229   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:05:52.207771   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:05:52.207840   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:05:52.214934   33429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:05:52.222802   33429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:05:52.222864   33429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:05:52.229685   33429 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 00:05:52.260389   33429 kubeadm.go:310] W0815 00:05:52.259734    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:05:52.260821   33429 kubeadm.go:310] W0815 00:05:52.260363    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:05:52.276476   33429 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0815 00:05:52.324462   33429 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
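Both deprecation warnings above point at the same remedy, quoted in the message itself: migrate the v1beta3 config with kubeadm's converter. For example (the output path here is only illustrative):

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-v1beta4.yaml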
	I0815 00:06:00.767633   33429 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:06:00.767703   33429 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:06:00.767862   33429 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 00:06:00.767927   33429 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0815 00:06:00.767962   33429 kubeadm.go:310] OS: Linux
	I0815 00:06:00.768007   33429 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 00:06:00.768077   33429 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 00:06:00.768149   33429 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 00:06:00.768219   33429 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 00:06:00.768289   33429 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 00:06:00.768359   33429 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 00:06:00.768410   33429 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 00:06:00.768473   33429 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 00:06:00.768532   33429 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 00:06:00.768655   33429 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:06:00.768793   33429 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:06:00.768925   33429 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:06:00.769001   33429 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:06:00.770536   33429 out.go:204]   - Generating certificates and keys ...
	I0815 00:06:00.770633   33429 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:06:00.770715   33429 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:06:00.770788   33429 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:06:00.770862   33429 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:06:00.770939   33429 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:06:00.771012   33429 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:06:00.771100   33429 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:06:00.771216   33429 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-877132 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:06:00.771279   33429 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:06:00.771436   33429 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-877132 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:06:00.771528   33429 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:06:00.771617   33429 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:06:00.771655   33429 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:06:00.771707   33429 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:06:00.771747   33429 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:06:00.771799   33429 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:06:00.771847   33429 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:06:00.771896   33429 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:06:00.771941   33429 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:06:00.772003   33429 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:06:00.772075   33429 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:06:00.773209   33429 out.go:204]   - Booting up control plane ...
	I0815 00:06:00.773295   33429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:06:00.773364   33429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:06:00.773424   33429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:06:00.773510   33429 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:06:00.773602   33429 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:06:00.773645   33429 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:06:00.773767   33429 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:06:00.773912   33429 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:06:00.773971   33429 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.387534ms
	I0815 00:06:00.774033   33429 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:06:00.774089   33429 kubeadm.go:310] [api-check] The API server is healthy after 4.001373443s
	I0815 00:06:00.774175   33429 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:06:00.774282   33429 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:06:00.774335   33429 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:06:00.774487   33429 kubeadm.go:310] [mark-control-plane] Marking the node addons-877132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:06:00.774541   33429 kubeadm.go:310] [bootstrap-token] Using token: 9cd728.sstuwlg203zlj5vt
	I0815 00:06:00.775824   33429 out.go:204]   - Configuring RBAC rules ...
	I0815 00:06:00.775911   33429 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:06:00.775980   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:06:00.776107   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:06:00.776230   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:06:00.776336   33429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:06:00.776409   33429 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:06:00.776498   33429 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:06:00.776540   33429 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:06:00.776577   33429 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:06:00.776582   33429 kubeadm.go:310] 
	I0815 00:06:00.776628   33429 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:06:00.776633   33429 kubeadm.go:310] 
	I0815 00:06:00.776733   33429 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:06:00.776748   33429 kubeadm.go:310] 
	I0815 00:06:00.776790   33429 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:06:00.776837   33429 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:06:00.776884   33429 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:06:00.776897   33429 kubeadm.go:310] 
	I0815 00:06:00.776948   33429 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:06:00.776954   33429 kubeadm.go:310] 
	I0815 00:06:00.777017   33429 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:06:00.777027   33429 kubeadm.go:310] 
	I0815 00:06:00.777098   33429 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:06:00.777208   33429 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:06:00.777297   33429 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:06:00.777306   33429 kubeadm.go:310] 
	I0815 00:06:00.777383   33429 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:06:00.777447   33429 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:06:00.777453   33429 kubeadm.go:310] 
	I0815 00:06:00.777520   33429 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9cd728.sstuwlg203zlj5vt \
	I0815 00:06:00.777619   33429 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0aaee585d8cab38ae3fe05542b0fa84d163b2d1c3df394dbd390896caee3c485 \
	I0815 00:06:00.777641   33429 kubeadm.go:310] 	--control-plane 
	I0815 00:06:00.777647   33429 kubeadm.go:310] 
	I0815 00:06:00.777711   33429 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:06:00.777716   33429 kubeadm.go:310] 
	I0815 00:06:00.777805   33429 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9cd728.sstuwlg203zlj5vt \
	I0815 00:06:00.777934   33429 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0aaee585d8cab38ae3fe05542b0fa84d163b2d1c3df394dbd390896caee3c485 
	I0815 00:06:00.777944   33429 cni.go:84] Creating CNI manager for ""
	I0815 00:06:00.777950   33429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:06:00.779348   33429 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:06:00.780465   33429 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:06:00.783950   33429 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:06:00.783963   33429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:06:00.799808   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
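Once the manifest is applied, the kindnet CNI should come up as a DaemonSet pod in kube-system. A quick check from inside the node, assuming minikube's kindnet manifest labels its pods app=kindnet:

	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet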
	I0815 00:06:00.977777   33429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:06:00.977867   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:00.977880   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-877132 minikube.k8s.io/updated_at=2024_08_15T00_06_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-877132 minikube.k8s.io/primary=true
	I0815 00:06:00.984880   33429 ops.go:34] apiserver oom_adj: -16
	I0815 00:06:01.066466   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:01.567517   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:02.066972   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:02.567491   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:03.067064   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:03.566958   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:04.066976   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:04.567486   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:05.067005   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:05.567422   33429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:05.627271   33429 kubeadm.go:1113] duration metric: took 4.649454362s to wait for elevateKubeSystemPrivileges
	I0815 00:06:05.627300   33429 kubeadm.go:394] duration metric: took 13.506358206s to StartCluster
	I0815 00:06:05.627317   33429 settings.go:142] acquiring lock: {Name:mk24702fc665a6ffc1bd2280cb721c81d58ddde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:05.627422   33429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:06:05.627782   33429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-25263/kubeconfig: {Name:mk5a4aa2b57f058fc0dbb1196c79fd5fb38108bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:05.627943   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:06:05.627954   33429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:06:05.628018   33429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
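The toEnable map above lists every addon this profile turns on, including ingress and metrics-server, the two whose tests failed in this report. Addons can also be toggled per profile from the CLI, for example:

	minikube addons enable metrics-server -p addons-877132
	minikube addons disable ingress -p addons-877132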
	I0815 00:06:05.628156   33429 config.go:182] Loaded profile config "addons-877132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:05.628201   33429 addons.go:69] Setting cloud-spanner=true in profile "addons-877132"
	I0815 00:06:05.628254   33429 addons.go:234] Setting addon cloud-spanner=true in "addons-877132"
	I0815 00:06:05.628288   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628202   33429 addons.go:69] Setting volumesnapshots=true in profile "addons-877132"
	I0815 00:06:05.628342   33429 addons.go:234] Setting addon volumesnapshots=true in "addons-877132"
	I0815 00:06:05.628369   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628165   33429 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-877132"
	I0815 00:06:05.628437   33429 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-877132"
	I0815 00:06:05.628459   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628174   33429 addons.go:69] Setting registry=true in profile "addons-877132"
	I0815 00:06:05.628560   33429 addons.go:234] Setting addon registry=true in "addons-877132"
	I0815 00:06:05.628601   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628177   33429 addons.go:69] Setting metrics-server=true in profile "addons-877132"
	I0815 00:06:05.628697   33429 addons.go:234] Setting addon metrics-server=true in "addons-877132"
	I0815 00:06:05.628730   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628818   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628181   33429 addons.go:69] Setting storage-provisioner=true in profile "addons-877132"
	I0815 00:06:05.628836   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628853   33429 addons.go:234] Setting addon storage-provisioner=true in "addons-877132"
	I0815 00:06:05.628880   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628938   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.629027   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.629163   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.629295   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628176   33429 addons.go:69] Setting ingress-dns=true in profile "addons-877132"
	I0815 00:06:05.629708   33429 addons.go:234] Setting addon ingress-dns=true in "addons-877132"
	I0815 00:06:05.629750   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.630183   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.631193   33429 out.go:177] * Verifying Kubernetes components...
	I0815 00:06:05.632576   33429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:05.628189   33429 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-877132"
	I0815 00:06:05.632713   33429 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-877132"
	I0815 00:06:05.632998   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628188   33429 addons.go:69] Setting helm-tiller=true in profile "addons-877132"
	I0815 00:06:05.633347   33429 addons.go:234] Setting addon helm-tiller=true in "addons-877132"
	I0815 00:06:05.628192   33429 addons.go:69] Setting ingress=true in profile "addons-877132"
	I0815 00:06:05.633495   33429 addons.go:234] Setting addon ingress=true in "addons-877132"
	I0815 00:06:05.633553   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.633625   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.628183   33429 addons.go:69] Setting inspektor-gadget=true in profile "addons-877132"
	I0815 00:06:05.634070   33429 addons.go:234] Setting addon inspektor-gadget=true in "addons-877132"
	I0815 00:06:05.634105   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.634517   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628196   33429 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-877132"
	I0815 00:06:05.636522   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.636547   33429 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-877132"
	I0815 00:06:05.636607   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.636740   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628163   33429 addons.go:69] Setting yakd=true in profile "addons-877132"
	I0815 00:06:05.637075   33429 addons.go:234] Setting addon yakd=true in "addons-877132"
	I0815 00:06:05.628197   33429 addons.go:69] Setting gcp-auth=true in profile "addons-877132"
	I0815 00:06:05.637104   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.637110   33429 mustload.go:65] Loading cluster: addons-877132
	I0815 00:06:05.637330   33429 config.go:182] Loaded profile config "addons-877132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:05.637534   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.637642   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.628213   33429 addons.go:69] Setting default-storageclass=true in profile "addons-877132"
	I0815 00:06:05.638167   33429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-877132"
	I0815 00:06:05.628218   33429 addons.go:69] Setting volcano=true in profile "addons-877132"
	I0815 00:06:05.643982   33429 addons.go:234] Setting addon volcano=true in "addons-877132"
	I0815 00:06:05.637044   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.646007   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.666042   33429 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:06:05.666184   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.667416   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.668094   33429 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:05.668112   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:06:05.668158   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
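(Note: the three lines above are the recurring manifest-install pattern throughout this log: stage a manifest under /etc/kubernetes/addons, stream its bytes to the node over SSH ("scp memory -->"), and resolve the SSH endpoint by asking Docker which host port is mapped to the container's 22/tcp. A minimal standalone sketch of that port lookup, reusing the exact inspect template from the log; the program itself is illustrative, not minikube's cli_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the log passes to `docker container inspect -f`.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-877132").Output()
		if err != nil {
			panic(err)
		}
		// With the port in hand, the SSH client dials 127.0.0.1:<port> as user
		// "docker" -- the sshutil lines below show 127.0.0.1:32768.
		fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
	}

End note.)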
	I0815 00:06:05.669804   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:06:05.673019   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:06:05.673079   33429 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:06:05.673166   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.679897   33429 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:06:05.679941   33429 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:06:05.681192   33429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:06:05.681415   33429 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:05.681428   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:06:05.681478   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.682634   33429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:05.682649   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:06:05.682697   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.682859   33429 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:06:05.684119   33429 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:06:05.684135   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:06:05.684175   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.693193   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:06:05.693193   33429 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:06:05.694564   33429 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:05.694595   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:06:05.694652   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.696426   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:05.697529   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:05.699629   33429 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:06:05.700079   33429 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:05.700096   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:06:05.700247   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.701053   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:06:05.701069   33429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:06:05.701119   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.726356   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.727572   33429 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	W0815 00:06:05.729466   33429 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 00:06:05.734048   33429 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 00:06:05.734072   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 00:06:05.734131   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.739495   33429 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:06:05.740707   33429 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:06:05.740722   33429 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:06:05.740772   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.742643   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.746915   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.752420   33429 addons.go:234] Setting addon default-storageclass=true in "addons-877132"
	I0815 00:06:05.752463   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.752930   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.756364   33429 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-877132"
	I0815 00:06:05.756407   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:05.756866   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:05.764150   33429 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:06:05.769890   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:06:05.769911   33429 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:06:05.769965   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.771126   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.771429   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.772693   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.774045   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.785888   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.791410   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.793006   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.801917   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.801923   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.803826   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:06:05.805212   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 00:06:05.806397   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:06:05.807467   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:06:05.808848   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:06:05.810186   33429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:05.810207   33429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:06:05.810247   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.810340   33429 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:06:05.811519   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:06:05.812764   33429 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:06:05.813852   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:06:05.813940   33429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:05.813956   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:06:05.813991   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.816128   33429 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:06:05.817177   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:06:05.817189   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:06:05.817233   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:05.830418   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.830537   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:05.832478   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	W0815 00:06:05.861368   33429 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:06:05.861400   33429 retry.go:31] will retry after 244.442357ms: ssh: handshake failed: EOF
	W0815 00:06:05.861473   33429 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:06:05.861481   33429 retry.go:31] will retry after 180.613371ms: ssh: handshake failed: EOF
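(Note: the two handshake failures above are absorbed by a retry loop with short, slightly randomized delays (244ms and 180ms here). A minimal sketch of that retry-with-jitter shape; the helper name and exact backoff policy are assumptions, not minikube's retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter runs fn up to attempts times, sleeping a jittered delay
	// between failures, and returns the last error if every attempt fails.
	func retryAfter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		dials := 0
		_ = retryAfter(3, 150*time.Millisecond, func() error {
			dials++
			if dials < 3 {
				return fmt.Errorf("ssh: handshake failed: EOF") // as in the log
			}
			return nil
		})
	}

End note.)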
	I0815 00:06:05.878964   33429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:06:05.879077   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
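(Note: the bash pipeline above rewrites the coredns ConfigMap in place: one sed expression inserts a hosts block immediately before the "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the gateway address 192.168.49.1, and the other inserts "log" before "errors". After the trailing `kubectl replace -f -`, the affected region of the Corefile reads roughly as below; the surrounding default directives are elided and assumed:

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

End note.)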
	I0815 00:06:06.077440   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:06.170081   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:06.174192   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:06.178934   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:06.271046   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:06.278098   33429 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:06:06.278121   33429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:06:06.356678   33429 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 00:06:06.356706   33429 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 00:06:06.359353   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:06.455385   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:06:06.455472   33429 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:06:06.474571   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:06:06.474654   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:06:06.554397   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:06.566563   33429 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:06:06.566657   33429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:06:06.656051   33429 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:06.656137   33429 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 00:06:06.656413   33429 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:06:06.656465   33429 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:06:06.673757   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:06:06.673805   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:06:06.677017   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:06:06.677039   33429 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:06:06.773033   33429 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:06:06.773062   33429 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:06:06.860223   33429 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:06:06.860300   33429 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:06:06.860566   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:06:06.860609   33429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:06:06.868145   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:06.960420   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:06:06.960448   33429 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:06:07.055232   33429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:07.055261   33429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:06:07.058115   33429 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:06:07.058143   33429 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:06:07.154264   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:06:07.154294   33429 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:06:07.155257   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:06:07.155277   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:06:07.374343   33429 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:07.374368   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:06:07.374783   33429 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:06:07.374806   33429 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:06:07.455705   33429 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:07.455728   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:06:07.456207   33429 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:07.456223   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:06:07.568863   33429 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:06:07.568893   33429 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:06:07.569574   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:06:07.569592   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:06:07.575132   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:07.659030   33429 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.779918509s)
	I0815 00:06:07.659192   33429 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 00:06:07.659130   33429 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.780140591s)
	I0815 00:06:07.660310   33429 node_ready.go:35] waiting up to 6m0s for node "addons-877132" to be "Ready" ...
	I0815 00:06:07.757557   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:07.770064   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:07.867105   33429 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:06:07.867183   33429 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:06:07.965723   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:06:07.965749   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:06:08.058081   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:08.360078   33429 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:06:08.360151   33429 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:06:08.367634   33429 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:06:08.367702   33429 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:06:08.378974   33429 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-877132" context rescaled to 1 replicas
	I0815 00:06:08.672860   33429 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:08.672881   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:06:08.676306   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:06:08.676368   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:06:08.970098   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:08.970391   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:06:08.970437   33429 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:06:09.362832   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:06:09.362917   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:06:09.670303   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:06:09.670329   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:06:09.675093   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:09.955187   33429 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:06:09.955258   33429 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:06:10.169704   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:06:10.455264   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.377776639s)
	I0815 00:06:10.455433   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.285322408s)
	I0815 00:06:10.455482   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.281226704s)
	I0815 00:06:12.160045   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.981063357s)
	I0815 00:06:12.160085   33429 addons.go:475] Verifying addon ingress=true in "addons-877132"
	I0815 00:06:12.160118   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.889032263s)
	I0815 00:06:12.160212   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.800835507s)
	I0815 00:06:12.160264   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.605841296s)
	I0815 00:06:12.160307   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.292133359s)
	I0815 00:06:12.160370   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.585212618s)
	I0815 00:06:12.160706   33429 addons.go:475] Verifying addon metrics-server=true in "addons-877132"
	I0815 00:06:12.162800   33429 out.go:177] * Verifying ingress addon...
	I0815 00:06:12.164520   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:12.166394   33429 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0815 00:06:12.170677   33429 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
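(Note: the storage-provisioner-rancher failure above is a plain optimistic-concurrency conflict: the StorageClass was modified between the read and the write, so the update is rejected with a stale resourceVersion. The standard client-go remedy is to re-read the object and retry under retry.RetryOnConflict. A sketch; the annotation key is the real Kubernetes default-class marker, everything else is illustrative:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func markDefault(cs *kubernetes.Clientset) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Fetch the latest resourceVersion on every attempt.
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err // a Conflict error here triggers another round
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := markDefault(kubernetes.NewForConfigOrDie(cfg)); err != nil {
			panic(err)
		}
	}

End note.)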
	I0815 00:06:12.177055   33429 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:06:12.177077   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
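(Note: the kapi.go lines that dominate the rest of this log all come from one loop: list pods by label selector, report their state, and poll until every matched pod is Ready. A condensed client-go sketch of that loop; the packages and calls are real, but the function shape and timeout are assumptions rather than minikube's kapi API:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allReady reports true once every pod matching selector has the
	// PodReady condition set to True.
	func allReady(cs *kubernetes.Clientset, ns, selector string) wait.ConditionWithContextFunc {
		return func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient: keep polling
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
			10*time.Minute, true, allReady(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"))
		if err != nil {
			panic(err)
		}
	}

End note.)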
	I0815 00:06:12.670053   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:12.964117   33429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:06:12.964195   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:12.990306   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:13.090320   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.33266703s)
	W0815 00:06:13.090355   33429 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:06:13.090375   33429 retry.go:31] will retry after 175.622541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
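(Note: the failure above is the classic CRD-ordering race: the VolumeSnapshotClass is applied in the same `kubectl apply` batch as the CRD that defines its kind, and the API server has not established the new kind yet, hence "no matches for kind ... ensure CRDs are installed first". minikube's remedy, visible a few lines below, is to retry the whole batch with `kubectl apply --force`; the more surgical pattern is to apply the CRDs, wait for them to become Established, then apply the custom resources. A sketch shelling out to kubectl, with file paths taken from this log and the wait step as the assumed addition:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// 1. CRDs first (paths as staged by the addon in this log).
		if err := run("apply",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml"); err != nil {
			panic(err)
		}
		// 2. Block until the new kind is served, so "no matches for kind" cannot happen.
		if err := run("wait", "--for=condition=Established",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
			panic(err)
		}
		// 3. Only now apply the custom resource that uses the kind.
		if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}

End note.)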
	I0815 00:06:13.090390   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.320236356s)
	I0815 00:06:13.090435   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.032315229s)
	I0815 00:06:13.090462   33429 addons.go:475] Verifying addon registry=true in "addons-877132"
	I0815 00:06:13.090501   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.120304899s)
	I0815 00:06:13.091944   33429 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-877132 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:06:13.091950   33429 out.go:177] * Verifying registry addon...
	I0815 00:06:13.093755   33429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:06:13.157110   33429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:06:13.157140   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:13.171785   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:13.256795   33429 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:06:13.266211   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:13.275869   33429 addons.go:234] Setting addon gcp-auth=true in "addons-877132"
	I0815 00:06:13.275940   33429 host.go:66] Checking if "addons-877132" exists ...
	I0815 00:06:13.276428   33429 cli_runner.go:164] Run: docker container inspect addons-877132 --format={{.State.Status}}
	I0815 00:06:13.297887   33429 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:06:13.297942   33429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877132
	I0815 00:06:13.314684   33429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/addons-877132/id_rsa Username:docker}
	I0815 00:06:13.658383   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.488585678s)
	I0815 00:06:13.658424   33429 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-877132"
	I0815 00:06:13.658651   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:13.659795   33429 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:06:13.662216   33429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:06:13.666005   33429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:06:13.666029   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:13.668835   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:14.155522   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:14.165718   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:14.166425   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:14.169249   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:14.596258   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:14.664609   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:14.670036   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:15.097283   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:15.166093   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:15.169339   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:15.596326   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:15.665152   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:15.669647   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:16.096862   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:16.165864   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:16.166340   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:16.196644   33429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.930396544s)
	I0815 00:06:16.196703   33429 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.898786484s)
	I0815 00:06:16.198662   33429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:16.198680   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:16.201338   33429 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:06:16.202541   33429 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:06:16.202556   33429 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:06:16.219803   33429 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:06:16.219831   33429 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:06:16.267002   33429 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:06:16.267071   33429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:06:16.283505   33429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:06:16.596842   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:16.665282   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:16.670150   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:16.805955   33429 addons.go:475] Verifying addon gcp-auth=true in "addons-877132"
	I0815 00:06:16.807303   33429 out.go:177] * Verifying gcp-auth addon...
	I0815 00:06:16.809043   33429 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:06:16.811299   33429 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:06:16.811318   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:17.096734   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:17.165013   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:17.168982   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:17.311617   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:17.597189   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:17.665310   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:17.669469   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:17.811621   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:18.097460   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:18.165631   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:18.169545   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:18.311265   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:18.597070   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:18.664448   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:18.697878   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:18.698131   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:18.811809   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:19.097224   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:19.165165   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:19.169296   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:19.317463   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:19.597102   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:19.665308   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:19.669472   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:19.812377   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:20.096809   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:20.165218   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:20.169284   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:20.312161   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:20.596603   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:20.665074   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:20.669058   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:20.812223   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:21.096596   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:21.165086   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:21.165136   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:21.168833   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:21.319500   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:21.596822   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:21.665152   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:21.669257   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:21.812305   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:22.096799   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:22.165049   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:22.168933   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:22.311831   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:22.596265   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:22.664599   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:22.669444   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:22.811399   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:23.096811   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:23.165124   33429 node_ready.go:53] node "addons-877132" has status "Ready":"False"
	I0815 00:06:23.165243   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:23.169359   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:23.312662   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:23.597142   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:23.664739   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:23.669884   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:23.811958   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:24.096209   33429 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:06:24.096232   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:24.164915   33429 node_ready.go:49] node "addons-877132" has status "Ready":"True"
	I0815 00:06:24.164938   33429 node_ready.go:38] duration metric: took 16.503624973s for node "addons-877132" to be "Ready" ...
	I0815 00:06:24.164955   33429 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:06:24.166049   33429 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:06:24.166068   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:24.170142   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:24.173429   33429 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-c42pc" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:24.355959   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:24.597389   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:24.666628   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:24.669410   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:24.812130   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:25.096547   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:25.167043   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:25.169602   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:25.355288   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:25.597426   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:25.666971   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:25.670355   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:25.678083   33429 pod_ready.go:92] pod "coredns-6f6b679f8f-c42pc" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.678106   33429 pod_ready.go:81] duration metric: took 1.504654703s for pod "coredns-6f6b679f8f-c42pc" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.678133   33429 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.682037   33429 pod_ready.go:92] pod "etcd-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.682055   33429 pod_ready.go:81] duration metric: took 3.913671ms for pod "etcd-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.682078   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.686074   33429 pod_ready.go:92] pod "kube-apiserver-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.686092   33429 pod_ready.go:81] duration metric: took 4.003183ms for pod "kube-apiserver-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.686104   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.690123   33429 pod_ready.go:92] pod "kube-controller-manager-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.690142   33429 pod_ready.go:81] duration metric: took 4.029781ms for pod "kube-controller-manager-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.690157   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6kx7" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.764591   33429 pod_ready.go:92] pod "kube-proxy-v6kx7" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:25.764670   33429 pod_ready.go:81] duration metric: took 74.503022ms for pod "kube-proxy-v6kx7" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.764686   33429 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:25.812299   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:26.097806   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:26.169194   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:26.169487   33429 pod_ready.go:92] pod "kube-scheduler-addons-877132" in "kube-system" namespace has status "Ready":"True"
	I0815 00:06:26.169514   33429 pod_ready.go:81] duration metric: took 404.819415ms for pod "kube-scheduler-addons-877132" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:26.169539   33429 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace to be "Ready" ...
	I0815 00:06:26.172362   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:26.312540   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:26.597942   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:26.666295   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:26.670387   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:26.812404   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:27.097376   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:27.167501   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:27.169841   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:27.312952   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:27.599500   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:27.666954   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:27.669769   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:27.812771   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:28.097661   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:28.167011   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:28.169733   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:28.174188   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:28.312769   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:28.597722   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:28.666848   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:28.669482   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:28.812800   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:29.098209   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:29.167180   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:29.169890   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:29.312700   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:29.597754   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:29.665572   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:29.669489   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:29.812665   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:30.157157   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:06:30.166642   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:30.177519   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:30.180207   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:30.356588   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:30.597957   33429 kapi.go:107] duration metric: took 17.504196925s to wait for kubernetes.io/minikube-addons=registry ...
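The kapi.go waits above poll pods by label selector until every match reports Running. A minimal manual equivalent (a sketch, assuming the same kubeconfig context and the selectors shown in the log):

	kubectl --context addons-877132 get pods -A -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-877132 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-877132 get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx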
	I0815 00:06:30.666243   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:30.670815   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:30.813007   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:31.167613   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:31.169328   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:31.313181   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:31.666720   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:31.669434   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:31.811910   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:32.166506   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:32.169062   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:32.311776   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:32.666775   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:32.670042   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:32.674328   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:32.811997   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:33.169603   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:33.170197   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:33.356708   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:33.666304   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:33.670392   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:33.812677   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:34.167381   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:34.170171   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:34.312544   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:34.666532   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:34.669482   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:34.812211   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:35.167257   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:35.169456   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:35.173423   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:35.312015   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:35.666706   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:35.669857   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:35.812364   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:36.166598   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:36.169053   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:36.312373   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:36.667821   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:36.671196   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:36.857237   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:37.169468   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:37.170919   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:37.175323   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:37.355970   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:37.666841   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:37.670268   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:37.812428   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:38.167080   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:38.170187   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:38.312594   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:38.666054   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:38.669825   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:38.812021   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:39.166241   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:39.267161   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:39.311799   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:39.667701   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:39.670267   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:39.674734   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:39.812349   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:40.168015   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:40.169594   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:40.312432   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:40.665674   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:40.669752   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:40.812537   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:41.167876   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:41.169598   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:41.312173   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:41.666948   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:41.670803   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:41.812414   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:42.166289   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:42.169867   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:42.173537   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:42.311882   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:42.667753   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:42.670676   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:42.812295   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:43.169618   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:43.169854   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:43.313320   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:43.666307   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:43.670383   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:43.812476   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:44.167127   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:44.170013   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:44.174233   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:44.311846   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:44.667126   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:44.669867   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:44.855643   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:45.167763   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:45.170016   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:45.313417   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:45.666129   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:45.670059   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:45.813091   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:46.166678   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:46.169893   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:46.312450   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:46.665829   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:46.670061   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:46.674101   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:46.812249   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:47.169598   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:47.169608   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:47.312248   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:47.666873   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:47.669374   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:47.812134   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:48.167158   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:48.170215   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:48.312747   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:48.666203   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:48.670026   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:48.812154   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:49.166461   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:49.169165   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:49.174184   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:49.312823   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:49.666717   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:49.669712   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:49.812030   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:50.166358   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:50.170069   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:50.312131   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:50.666409   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:50.669159   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:50.811804   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:51.167643   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:51.170383   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:51.174565   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:51.357329   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:51.667694   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:51.673988   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:51.855570   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:52.167630   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:52.171705   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:52.357523   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:52.667416   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:52.671989   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:52.856473   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:53.167342   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:53.170225   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:53.357056   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:53.667785   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:53.670341   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:53.675346   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:53.812287   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:54.167962   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:54.169368   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:54.312346   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:54.665866   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:54.670543   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:54.812505   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:55.166866   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:55.169578   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:55.312197   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:55.667048   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:55.670233   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:55.812036   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:56.167650   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:56.169862   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:56.173888   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:56.312438   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:56.666120   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:56.670307   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:56.811811   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:57.168190   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:57.171201   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:57.313042   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:57.673029   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:57.675625   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:57.813071   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:58.167075   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:58.170091   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:58.175180   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:06:58.312767   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:58.666795   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:58.669391   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:58.812165   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:59.167267   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:59.170177   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:59.312417   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:06:59.666057   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:06:59.669690   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:06:59.811822   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:00.166831   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:00.170224   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:00.312503   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:00.667657   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:00.676413   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:00.767451   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:00.812517   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:01.166638   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:01.169325   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:01.312692   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:01.666198   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:01.669887   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:01.812554   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:02.168126   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:02.169326   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:02.313091   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:02.667880   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:02.669870   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:02.865938   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:03.167117   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:03.176085   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:03.267575   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:03.367374   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:03.666205   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:03.671232   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:03.812560   33429 kapi.go:107] duration metric: took 47.003516074s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:07:03.814146   33429 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-877132 cluster.
	I0815 00:07:03.815458   33429 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:07:03.816787   33429 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
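The opt-out described in the message above works by putting a label on the pod. A minimal sketch (the pod name, image, and label value here are illustrative; only the gcp-auth-skip-secret key is specified by the message):

	# renders a pod manifest carrying the opt-out label without creating anything
	kubectl run skip-gcp-auth-demo --image=nginx --labels=gcp-auth-skip-secret=true --dry-run=client -o yaml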
	I0815 00:07:04.166733   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:04.170848   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:04.671507   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:04.684842   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.166792   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:05.169699   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.666612   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:05.669642   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.674348   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:06.166989   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:06.169586   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.667233   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:06.670146   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.166493   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.169125   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.667067   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.670993   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.674543   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:08.166585   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.169456   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.667276   33429 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.670520   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.166747   33429 kapi.go:107] duration metric: took 55.504525178s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:07:09.169549   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.670367   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.170088   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.173925   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:10.670285   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.169891   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.670347   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.169423   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.174126   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:12.670730   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.169706   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.670690   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.169616   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.670409   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.675991   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:15.169947   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:15.670905   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.170207   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.670228   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.169428   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.173681   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:17.670291   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.169163   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.670150   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.169975   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.174116   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:19.768032   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.171759   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.670635   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.170137   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.670728   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.673692   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:22.169711   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:22.670073   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.170251   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.670243   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.674950   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:24.169467   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:24.670307   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.169275   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.670140   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.170211   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.174259   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:26.670923   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.169802   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.670058   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.170245   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.180361   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:28.671837   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.174730   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.671621   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.176404   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.259231   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:30.671253   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.170859   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.670796   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.170361   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.670998   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.674300   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:33.170206   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:33.671980   33429 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:34.170420   33429 kapi.go:107] duration metric: took 1m22.004022687s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:07:34.172004   33429 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, helm-tiller, metrics-server, default-storageclass, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0815 00:07:34.173191   33429 addons.go:510] duration metric: took 1m28.545170819s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner helm-tiller metrics-server default-storageclass inspektor-gadget yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
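The enabled-addons summary above reflects the standard addon workflow; a sketch using the same binary and profile as the rest of this report (metrics-server chosen as an example addon from the list):

	out/minikube-linux-amd64 -p addons-877132 addons list
	out/minikube-linux-amd64 -p addons-877132 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-877132 addons disable metrics-server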
	I0815 00:07:35.175895   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:37.674777   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:40.174631   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:42.174721   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:44.674328   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:46.675786   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:49.174408   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:51.674873   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:54.174351   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:56.174565   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:58.174795   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:00.175420   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:02.674741   33429 pod_ready.go:102] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:04.674774   33429 pod_ready.go:92] pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:04.674806   33429 pod_ready.go:81] duration metric: took 1m38.505250087s for pod "metrics-server-8988944d9-sgrxc" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:04.674822   33429 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6d62n" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:04.678550   33429 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-6d62n" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:04.678569   33429 pod_ready.go:81] duration metric: took 3.739721ms for pod "nvidia-device-plugin-daemonset-6d62n" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:04.678586   33429 pod_ready.go:38] duration metric: took 1m40.513617774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
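The pod_ready waits above are equivalent to kubectl's built-in readiness wait; a sketch against the same cluster, with the selector and the 6m timeout taken from the log:

	kubectl --context addons-877132 -n kube-system wait --for=condition=ready pod -l k8s-app=kube-dns --timeout=6m0s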
	I0815 00:08:04.678603   33429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:08:04.678630   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:04.678677   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:04.710676   33429 cri.go:89] found id: "ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:04.710700   33429 cri.go:89] found id: ""
	I0815 00:08:04.710708   33429 logs.go:276] 1 containers: [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249]
	I0815 00:08:04.710757   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.713725   33429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:04.713779   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:04.744311   33429 cri.go:89] found id: "f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:04.744335   33429 cri.go:89] found id: ""
	I0815 00:08:04.744345   33429 logs.go:276] 1 containers: [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec]
	I0815 00:08:04.744387   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.747394   33429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:04.747437   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:04.777949   33429 cri.go:89] found id: "4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:04.777966   33429 cri.go:89] found id: ""
	I0815 00:08:04.777973   33429 logs.go:276] 1 containers: [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3]
	I0815 00:08:04.778010   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.780902   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:04.780976   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:04.812184   33429 cri.go:89] found id: "bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:04.812204   33429 cri.go:89] found id: ""
	I0815 00:08:04.812213   33429 logs.go:276] 1 containers: [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0]
	I0815 00:08:04.812254   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.815194   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:04.815263   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:04.845303   33429 cri.go:89] found id: "e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:04.845321   33429 cri.go:89] found id: ""
	I0815 00:08:04.845329   33429 logs.go:276] 1 containers: [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1]
	I0815 00:08:04.845367   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.848510   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:04.848570   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:04.879573   33429 cri.go:89] found id: "4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:04.879594   33429 cri.go:89] found id: ""
	I0815 00:08:04.879601   33429 logs.go:276] 1 containers: [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280]
	I0815 00:08:04.879654   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.882866   33429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:04.882926   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:04.913837   33429 cri.go:89] found id: "17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:04.913859   33429 cri.go:89] found id: ""
	I0815 00:08:04.913866   33429 logs.go:276] 1 containers: [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677]
	I0815 00:08:04.913905   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:04.917007   33429 logs.go:123] Gathering logs for kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] ...
	I0815 00:08:04.917030   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:04.947729   33429 logs.go:123] Gathering logs for kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] ...
	I0815 00:08:04.947755   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:04.983589   33429 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:04.983615   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:04.995473   33429 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:04.995501   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:05.087662   33429 logs.go:123] Gathering logs for kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] ...
	I0815 00:08:05.087690   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:05.129108   33429 logs.go:123] Gathering logs for coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] ...
	I0815 00:08:05.129137   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:05.164587   33429 logs.go:123] Gathering logs for kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] ...
	I0815 00:08:05.164624   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:05.203248   33429 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:05.203273   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:05.270185   33429 logs.go:123] Gathering logs for etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] ...
	I0815 00:08:05.270214   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:05.317054   33429 logs.go:123] Gathering logs for kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] ...
	I0815 00:08:05.317083   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:05.370222   33429 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:05.370252   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:05.446168   33429 logs.go:123] Gathering logs for container status ...
	I0815 00:08:05.446204   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:07.987446   33429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:08:08.000573   33429 api_server.go:72] duration metric: took 2m2.372588715s to wait for apiserver process to appear ...
	I0815 00:08:08.000594   33429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:08:08.000627   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:08.000662   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:08.031934   33429 cri.go:89] found id: "ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:08.031958   33429 cri.go:89] found id: ""
	I0815 00:08:08.031967   33429 logs.go:276] 1 containers: [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249]
	I0815 00:08:08.032018   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.034976   33429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:08.035037   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:08.065162   33429 cri.go:89] found id: "f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:08.065186   33429 cri.go:89] found id: ""
	I0815 00:08:08.065194   33429 logs.go:276] 1 containers: [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec]
	I0815 00:08:08.065236   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.068160   33429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:08.068208   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:08.099502   33429 cri.go:89] found id: "4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:08.099523   33429 cri.go:89] found id: ""
	I0815 00:08:08.099531   33429 logs.go:276] 1 containers: [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3]
	I0815 00:08:08.099578   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.102636   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:08.102683   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:08.134129   33429 cri.go:89] found id: "bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:08.134149   33429 cri.go:89] found id: ""
	I0815 00:08:08.134157   33429 logs.go:276] 1 containers: [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0]
	I0815 00:08:08.134193   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.137077   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:08.137118   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:08.169612   33429 cri.go:89] found id: "e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:08.169633   33429 cri.go:89] found id: ""
	I0815 00:08:08.169643   33429 logs.go:276] 1 containers: [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1]
	I0815 00:08:08.169693   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.173000   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:08.173051   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:08.203461   33429 cri.go:89] found id: "4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:08.203485   33429 cri.go:89] found id: ""
	I0815 00:08:08.203494   33429 logs.go:276] 1 containers: [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280]
	I0815 00:08:08.203533   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.206389   33429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:08.206430   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:08.236086   33429 cri.go:89] found id: "17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:08.236109   33429 cri.go:89] found id: ""
	I0815 00:08:08.236119   33429 logs.go:276] 1 containers: [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677]
	I0815 00:08:08.236166   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:08.239141   33429 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:08.239159   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:08.249874   33429 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:08.249896   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:08.340261   33429 logs.go:123] Gathering logs for kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] ...
	I0815 00:08:08.340287   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:08.394232   33429 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:08.394260   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:08.466817   33429 logs.go:123] Gathering logs for container status ...
	I0815 00:08:08.466849   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:08.506450   33429 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:08.506477   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:08.573143   33429 logs.go:123] Gathering logs for kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] ...
	I0815 00:08:08.573173   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:08.613210   33429 logs.go:123] Gathering logs for etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] ...
	I0815 00:08:08.613235   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:08.659426   33429 logs.go:123] Gathering logs for coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] ...
	I0815 00:08:08.659453   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:08.695176   33429 logs.go:123] Gathering logs for kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] ...
	I0815 00:08:08.695200   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:08.732673   33429 logs.go:123] Gathering logs for kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] ...
	I0815 00:08:08.732699   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:08.762290   33429 logs.go:123] Gathering logs for kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] ...
	I0815 00:08:08.762314   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:11.299374   33429 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 00:08:11.302863   33429 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 00:08:11.303608   33429 api_server.go:141] control plane version: v1.31.0
	I0815 00:08:11.303629   33429 api_server.go:131] duration metric: took 3.30302873s to wait for apiserver health ...
	I0815 00:08:11.303638   33429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:08:11.303662   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:11.303715   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:11.335368   33429 cri.go:89] found id: "ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:11.335387   33429 cri.go:89] found id: ""
	I0815 00:08:11.335394   33429 logs.go:276] 1 containers: [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249]
	I0815 00:08:11.335433   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.338517   33429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:11.338588   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:11.368653   33429 cri.go:89] found id: "f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:11.368675   33429 cri.go:89] found id: ""
	I0815 00:08:11.368682   33429 logs.go:276] 1 containers: [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec]
	I0815 00:08:11.368727   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.371711   33429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:11.371762   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:11.403775   33429 cri.go:89] found id: "4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:11.403798   33429 cri.go:89] found id: ""
	I0815 00:08:11.403808   33429 logs.go:276] 1 containers: [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3]
	I0815 00:08:11.403853   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.406855   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:11.406913   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:11.437894   33429 cri.go:89] found id: "bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:11.437911   33429 cri.go:89] found id: ""
	I0815 00:08:11.437918   33429 logs.go:276] 1 containers: [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0]
	I0815 00:08:11.437963   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.440939   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:11.440996   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:11.472247   33429 cri.go:89] found id: "e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:11.472267   33429 cri.go:89] found id: ""
	I0815 00:08:11.472274   33429 logs.go:276] 1 containers: [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1]
	I0815 00:08:11.472312   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.475285   33429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:11.475339   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:11.505337   33429 cri.go:89] found id: "4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:11.505359   33429 cri.go:89] found id: ""
	I0815 00:08:11.505367   33429 logs.go:276] 1 containers: [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280]
	I0815 00:08:11.505419   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.508302   33429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:11.508356   33429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:11.539121   33429 cri.go:89] found id: "17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:11.539144   33429 cri.go:89] found id: ""
	I0815 00:08:11.539153   33429 logs.go:276] 1 containers: [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677]
	I0815 00:08:11.539199   33429 ssh_runner.go:195] Run: which crictl
	I0815 00:08:11.542054   33429 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:11.542077   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:11.611565   33429 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:11.611596   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:11.623230   33429 logs.go:123] Gathering logs for etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] ...
	I0815 00:08:11.623255   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec"
	I0815 00:08:11.670940   33429 logs.go:123] Gathering logs for kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] ...
	I0815 00:08:11.670967   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677"
	I0815 00:08:11.706879   33429 logs.go:123] Gathering logs for container status ...
	I0815 00:08:11.706906   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:11.745902   33429 logs.go:123] Gathering logs for kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] ...
	I0815 00:08:11.745929   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280"
	I0815 00:08:11.802685   33429 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:11.802714   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:11.873752   33429 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:11.873781   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:11.962736   33429 logs.go:123] Gathering logs for kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] ...
	I0815 00:08:11.962765   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249"
	I0815 00:08:12.004013   33429 logs.go:123] Gathering logs for coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] ...
	I0815 00:08:12.004041   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3"
	I0815 00:08:12.039680   33429 logs.go:123] Gathering logs for kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] ...
	I0815 00:08:12.039709   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0"
	I0815 00:08:12.079354   33429 logs.go:123] Gathering logs for kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] ...
	I0815 00:08:12.079381   33429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1"
	I0815 00:08:14.620661   33429 system_pods.go:59] 19 kube-system pods found
	I0815 00:08:14.620688   33429 system_pods.go:61] "coredns-6f6b679f8f-c42pc" [c7d6d0e1-376e-4009-b23c-4ec563e9fb5c] Running
	I0815 00:08:14.620693   33429 system_pods.go:61] "csi-hostpath-attacher-0" [fc9a04f6-9b77-46c0-8179-7faf0b4d0508] Running
	I0815 00:08:14.620696   33429 system_pods.go:61] "csi-hostpath-resizer-0" [7832e4c7-4b14-4716-a24c-299d683020e7] Running
	I0815 00:08:14.620700   33429 system_pods.go:61] "csi-hostpathplugin-9bq4q" [20f345c9-95b5-4fdd-9b09-0ef44d9e025c] Running
	I0815 00:08:14.620703   33429 system_pods.go:61] "etcd-addons-877132" [c9fcbdb6-c56f-4565-955e-bd059a243317] Running
	I0815 00:08:14.620706   33429 system_pods.go:61] "kindnet-chbk7" [d5bb12f8-f766-4a6c-96d9-4a736660a5d4] Running
	I0815 00:08:14.620710   33429 system_pods.go:61] "kube-apiserver-addons-877132" [f11ef0cb-06f5-43c2-ab90-9e16415dfbdb] Running
	I0815 00:08:14.620715   33429 system_pods.go:61] "kube-controller-manager-addons-877132" [feefe7f6-b920-4abc-868e-c757b7f0611e] Running
	I0815 00:08:14.620719   33429 system_pods.go:61] "kube-ingress-dns-minikube" [a8fc2d7b-0cd2-425b-a632-15debd9dd0c7] Running
	I0815 00:08:14.620724   33429 system_pods.go:61] "kube-proxy-v6kx7" [ba0854ec-7db4-4e33-9e58-c440a176fab5] Running
	I0815 00:08:14.620728   33429 system_pods.go:61] "kube-scheduler-addons-877132" [711196de-fe86-4df3-9d53-f4e1ccd343e5] Running
	I0815 00:08:14.620733   33429 system_pods.go:61] "metrics-server-8988944d9-sgrxc" [39bb006b-3cb8-4b3f-bd6c-a14e00873f12] Running
	I0815 00:08:14.620741   33429 system_pods.go:61] "nvidia-device-plugin-daemonset-6d62n" [0b96b707-d892-4a7c-9728-5d4ddf5b5465] Running
	I0815 00:08:14.620747   33429 system_pods.go:61] "registry-6fb4cdfc84-r4n2w" [6ba345fc-6428-44c4-a39f-a525f747a85d] Running
	I0815 00:08:14.620755   33429 system_pods.go:61] "registry-proxy-9j2gn" [dafac940-abdc-432d-9a46-cf80da8907aa] Running
	I0815 00:08:14.620759   33429 system_pods.go:61] "snapshot-controller-56fcc65765-fcg26" [94c41682-f8b9-44c9-be9d-f4967e9d88fb] Running
	I0815 00:08:14.620762   33429 system_pods.go:61] "snapshot-controller-56fcc65765-gmh75" [8d111fc4-b50c-4b66-b7ed-f75310edc407] Running
	I0815 00:08:14.620765   33429 system_pods.go:61] "storage-provisioner" [da0204ad-464f-432a-8431-4e0541f190da] Running
	I0815 00:08:14.620771   33429 system_pods.go:61] "tiller-deploy-b48cc5f79-bthmf" [62d076df-bde8-40cf-ab28-b8fba5fea0d6] Running
	I0815 00:08:14.620777   33429 system_pods.go:74] duration metric: took 3.317132352s to wait for pod list to return data ...
	I0815 00:08:14.620786   33429 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:08:14.623061   33429 default_sa.go:45] found service account: "default"
	I0815 00:08:14.623081   33429 default_sa.go:55] duration metric: took 2.290351ms for default service account to be created ...
	I0815 00:08:14.623090   33429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:08:14.630696   33429 system_pods.go:86] 19 kube-system pods found
	I0815 00:08:14.630721   33429 system_pods.go:89] "coredns-6f6b679f8f-c42pc" [c7d6d0e1-376e-4009-b23c-4ec563e9fb5c] Running
	I0815 00:08:14.630729   33429 system_pods.go:89] "csi-hostpath-attacher-0" [fc9a04f6-9b77-46c0-8179-7faf0b4d0508] Running
	I0815 00:08:14.630735   33429 system_pods.go:89] "csi-hostpath-resizer-0" [7832e4c7-4b14-4716-a24c-299d683020e7] Running
	I0815 00:08:14.630741   33429 system_pods.go:89] "csi-hostpathplugin-9bq4q" [20f345c9-95b5-4fdd-9b09-0ef44d9e025c] Running
	I0815 00:08:14.630746   33429 system_pods.go:89] "etcd-addons-877132" [c9fcbdb6-c56f-4565-955e-bd059a243317] Running
	I0815 00:08:14.630752   33429 system_pods.go:89] "kindnet-chbk7" [d5bb12f8-f766-4a6c-96d9-4a736660a5d4] Running
	I0815 00:08:14.630758   33429 system_pods.go:89] "kube-apiserver-addons-877132" [f11ef0cb-06f5-43c2-ab90-9e16415dfbdb] Running
	I0815 00:08:14.630766   33429 system_pods.go:89] "kube-controller-manager-addons-877132" [feefe7f6-b920-4abc-868e-c757b7f0611e] Running
	I0815 00:08:14.630773   33429 system_pods.go:89] "kube-ingress-dns-minikube" [a8fc2d7b-0cd2-425b-a632-15debd9dd0c7] Running
	I0815 00:08:14.630783   33429 system_pods.go:89] "kube-proxy-v6kx7" [ba0854ec-7db4-4e33-9e58-c440a176fab5] Running
	I0815 00:08:14.630790   33429 system_pods.go:89] "kube-scheduler-addons-877132" [711196de-fe86-4df3-9d53-f4e1ccd343e5] Running
	I0815 00:08:14.630798   33429 system_pods.go:89] "metrics-server-8988944d9-sgrxc" [39bb006b-3cb8-4b3f-bd6c-a14e00873f12] Running
	I0815 00:08:14.630809   33429 system_pods.go:89] "nvidia-device-plugin-daemonset-6d62n" [0b96b707-d892-4a7c-9728-5d4ddf5b5465] Running
	I0815 00:08:14.630817   33429 system_pods.go:89] "registry-6fb4cdfc84-r4n2w" [6ba345fc-6428-44c4-a39f-a525f747a85d] Running
	I0815 00:08:14.630827   33429 system_pods.go:89] "registry-proxy-9j2gn" [dafac940-abdc-432d-9a46-cf80da8907aa] Running
	I0815 00:08:14.630834   33429 system_pods.go:89] "snapshot-controller-56fcc65765-fcg26" [94c41682-f8b9-44c9-be9d-f4967e9d88fb] Running
	I0815 00:08:14.630844   33429 system_pods.go:89] "snapshot-controller-56fcc65765-gmh75" [8d111fc4-b50c-4b66-b7ed-f75310edc407] Running
	I0815 00:08:14.630853   33429 system_pods.go:89] "storage-provisioner" [da0204ad-464f-432a-8431-4e0541f190da] Running
	I0815 00:08:14.630859   33429 system_pods.go:89] "tiller-deploy-b48cc5f79-bthmf" [62d076df-bde8-40cf-ab28-b8fba5fea0d6] Running
	I0815 00:08:14.630869   33429 system_pods.go:126] duration metric: took 7.771619ms to wait for k8s-apps to be running ...
	I0815 00:08:14.630880   33429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:08:14.630927   33429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:08:14.641293   33429 system_svc.go:56] duration metric: took 10.409007ms WaitForService to wait for kubelet
	I0815 00:08:14.641320   33429 kubeadm.go:582] duration metric: took 2m9.013343958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:08:14.641343   33429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:08:14.644057   33429 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 00:08:14.644080   33429 node_conditions.go:123] node cpu capacity is 8
	I0815 00:08:14.644090   33429 node_conditions.go:105] duration metric: took 2.743633ms to run NodePressure ...
	I0815 00:08:14.644101   33429 start.go:241] waiting for startup goroutines ...
	I0815 00:08:14.644107   33429 start.go:246] waiting for cluster config update ...
	I0815 00:08:14.644121   33429 start.go:255] writing updated cluster config ...
	I0815 00:08:14.644346   33429 ssh_runner.go:195] Run: rm -f paused
	I0815 00:08:14.690031   33429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:08:14.691992   33429 out.go:177] * Done! kubectl is now configured to use "addons-877132" cluster and "default" namespace by default
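	The run log above follows a fixed diagnostic loop: enumerate each control-plane container with crictl, tail its logs, then poll the apiserver's healthz endpoint. A minimal hand-runnable sketch of the same sequence (the profile name, container filter, and endpoint come from this log; reaching the node through "minikube ssh" is an assumption about the reader's setup):

	# Find the kube-apiserver container, tail its last 400 log lines, and poll
	# the same healthz URL the log checks (expect "ok", matching the 200 above).
	ID=$(minikube -p addons-877132 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver)
	minikube -p addons-877132 ssh -- sudo crictl logs --tail 400 "$ID"
	minikube -p addons-877132 ssh -- sudo journalctl -u kubelet -n 400
	curl -k https://192.168.49.2:8443/healthz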
	
	
	==> CRI-O <==
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.034203210Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-qfwsb from CNI network \"kindnet\" (type=ptp)"
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.062960615Z" level=info msg="Stopped pod sandbox: e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=cb383282-5d43-4287-8d1e-b6a4525c6ffc name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.380101929Z" level=info msg="Removing container: 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d" id=1c2b606a-14e4-42f5-a1b2-78ab1b63db5f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:11:26 addons-877132 crio[1030]: time="2024-08-15 00:11:26.392251595Z" level=info msg="Removed container 8b6c013e33250c6bcb7d48fe79db0c35b0f196cc41a0c7b7457c198e31fee13d: ingress-nginx/ingress-nginx-controller-7559cbf597-qfwsb/controller" id=1c2b606a-14e4-42f5-a1b2-78ab1b63db5f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.326320901Z" level=info msg="Removing container: 639140631be8129256cf6ba2fde10f5b9b62bfa09a94037c425f7ef3814d5c6b" id=3f336cc2-d122-4382-8028-604ed51b6259 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.338790647Z" level=info msg="Removed container 639140631be8129256cf6ba2fde10f5b9b62bfa09a94037c425f7ef3814d5c6b: ingress-nginx/ingress-nginx-admission-patch-pds8t/patch" id=3f336cc2-d122-4382-8028-604ed51b6259 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.339809746Z" level=info msg="Removing container: ce1da5c1eb8b31b251f882d294031461ec13cf023ac5de123d7ef08d9baeb801" id=6c7d3092-7b83-4f01-8583-72c441c7c6b9 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.354498616Z" level=info msg="Removed container ce1da5c1eb8b31b251f882d294031461ec13cf023ac5de123d7ef08d9baeb801: ingress-nginx/ingress-nginx-admission-create-6bdfx/create" id=6c7d3092-7b83-4f01-8583-72c441c7c6b9 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.355677330Z" level=info msg="Stopping pod sandbox: 2787c4d975b2979a9fb201ca4551780a82ba9bc77ab17406b91d6825a5554495" id=49b63c26-48fe-46ce-bcba-b2bc1007250e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.355707760Z" level=info msg="Stopped pod sandbox (already stopped): 2787c4d975b2979a9fb201ca4551780a82ba9bc77ab17406b91d6825a5554495" id=49b63c26-48fe-46ce-bcba-b2bc1007250e name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.355967838Z" level=info msg="Removing pod sandbox: 2787c4d975b2979a9fb201ca4551780a82ba9bc77ab17406b91d6825a5554495" id=b844525e-f7a3-4acb-98a9-df6763014c0a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.361906145Z" level=info msg="Removed pod sandbox: 2787c4d975b2979a9fb201ca4551780a82ba9bc77ab17406b91d6825a5554495" id=b844525e-f7a3-4acb-98a9-df6763014c0a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.362205096Z" level=info msg="Stopping pod sandbox: d13d9edd73ff2c5bc87166e0a955026882b882ea1b73cc83cae2386a11b38297" id=b3d4ff78-5275-4ed2-bdc6-82f406ef5e63 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.362238160Z" level=info msg="Stopped pod sandbox (already stopped): d13d9edd73ff2c5bc87166e0a955026882b882ea1b73cc83cae2386a11b38297" id=b3d4ff78-5275-4ed2-bdc6-82f406ef5e63 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.362490001Z" level=info msg="Removing pod sandbox: d13d9edd73ff2c5bc87166e0a955026882b882ea1b73cc83cae2386a11b38297" id=a2842046-b3f8-483c-a84c-3f7c200dd459 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.368152457Z" level=info msg="Removed pod sandbox: d13d9edd73ff2c5bc87166e0a955026882b882ea1b73cc83cae2386a11b38297" id=a2842046-b3f8-483c-a84c-3f7c200dd459 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.368422496Z" level=info msg="Stopping pod sandbox: 368d2541a5710cf8d55c78f6e53db9b1a286de3bd048f20ae02955cda12094e5" id=17799497-ef1b-4e44-bae9-9af854353118 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.368450012Z" level=info msg="Stopped pod sandbox (already stopped): 368d2541a5710cf8d55c78f6e53db9b1a286de3bd048f20ae02955cda12094e5" id=17799497-ef1b-4e44-bae9-9af854353118 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.368693630Z" level=info msg="Removing pod sandbox: 368d2541a5710cf8d55c78f6e53db9b1a286de3bd048f20ae02955cda12094e5" id=ea78c6de-d5c2-4a90-8225-919ea8d93acd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.374189370Z" level=info msg="Removed pod sandbox: 368d2541a5710cf8d55c78f6e53db9b1a286de3bd048f20ae02955cda12094e5" id=ea78c6de-d5c2-4a90-8225-919ea8d93acd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.374482642Z" level=info msg="Stopping pod sandbox: e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=99c3e4e1-e7ef-4422-b3f2-24b82a3d4fd9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.374516182Z" level=info msg="Stopped pod sandbox (already stopped): e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=99c3e4e1-e7ef-4422-b3f2-24b82a3d4fd9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.374758681Z" level=info msg="Removing pod sandbox: e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=6b34dafc-e4f6-4a99-96ec-52f0b0e2fe03 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:12:00 addons-877132 crio[1030]: time="2024-08-15 00:12:00.380376572Z" level=info msg="Removed pod sandbox: e904b5086ff6b9ad611ea53e3260a51d8d9922116446bf06cf84b59b0dc131c4" id=6b34dafc-e4f6-4a99-96ec-52f0b0e2fe03 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:14:05 addons-877132 crio[1030]: time="2024-08-15 00:14:05.970157496Z" level=info msg="Stopping container: 70a331a39156226bf5ccd77f9124f42d9f00706ff9c6b97c68110d02ded4009b (timeout: 30s)" id=08efff01-63ec-440e-ac7b-9deb1406c65a name=/runtime.v1.RuntimeService/StopContainer
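	The CRI-O journal above is the runtime-side view of the addon teardown: each StopPodSandbox/RemovePodSandbox pair retires one ingress-nginx sandbox. A hedged sketch for inspecting sandbox state on the node (the ID prefix is taken from the log and stops resolving once RemovePodSandbox has run):

	# List pod sandboxes known to CRI-O, then inspect one by (prefix) ID.
	minikube -p addons-877132 ssh -- sudo crictl pods
	minikube -p addons-877132 ssh -- sudo crictl inspectp e904b5086ff6b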
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a77039a7ec091       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   e2e48d3540fb5       hello-world-app-55bf9c44b4-jw59v
	4700be58d0014       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   59d991d01214e       nginx
	13564dbfd5a46       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   75dbd279c225c       busybox
	70a331a391562       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   f0f38be4fe7eb       metrics-server-8988944d9-sgrxc
	dd563d287505a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   e7ea6d4b55ddc       local-path-provisioner-86d989889c-zjfx8
	4ba66a3367daf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   5a0b205e08bed       coredns-6f6b679f8f-c42pc
	03e7fb303164d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   5106a78d90785       storage-provisioner
	17f6bd6dd22c5       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      7 minutes ago       Running             kindnet-cni               0                   dfc330b405c09       kindnet-chbk7
	e5fd37ee5ee48       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   0a7cee1c53467       kube-proxy-v6kx7
	bd77b5ecfadb9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   af893679823f2       kube-scheduler-addons-877132
	f16f228580088       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   d052b7010e20a       etcd-addons-877132
	ea70e9f2778e6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   9f6fa62a9f394       kube-apiserver-addons-877132
	4043a5cc95e0b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   2875939046c1e       kube-controller-manager-addons-877132
	
	
	==> coredns [4ba66a3367daf4b53a35259a09c9bd004e02804ed6293ed0baf74ac2ef4f06d3] <==
	[INFO] 10.244.0.2:47292 - 10763 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113724s
	[INFO] 10.244.0.2:43420 - 43249 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004319366s
	[INFO] 10.244.0.2:43420 - 28402 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005696932s
	[INFO] 10.244.0.2:44876 - 48744 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005006536s
	[INFO] 10.244.0.2:44876 - 2155 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.018558843s
	[INFO] 10.244.0.2:58123 - 62458 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004398771s
	[INFO] 10.244.0.2:58123 - 33254 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004753437s
	[INFO] 10.244.0.2:33809 - 16532 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000082026s
	[INFO] 10.244.0.2:33809 - 31121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117847s
	[INFO] 10.244.0.20:57076 - 48443 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153305s
	[INFO] 10.244.0.20:60786 - 431 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000243713s
	[INFO] 10.244.0.20:48667 - 18154 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120297s
	[INFO] 10.244.0.20:60036 - 57584 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157371s
	[INFO] 10.244.0.20:35143 - 29134 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109833s
	[INFO] 10.244.0.20:54584 - 62281 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111359s
	[INFO] 10.244.0.20:33107 - 11591 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007471795s
	[INFO] 10.244.0.20:41578 - 30236 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007561275s
	[INFO] 10.244.0.20:44643 - 23246 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004784347s
	[INFO] 10.244.0.20:57858 - 50433 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006739376s
	[INFO] 10.244.0.20:33262 - 49571 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00388293s
	[INFO] 10.244.0.20:54723 - 15767 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004555087s
	[INFO] 10.244.0.20:33820 - 52576 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001059757s
	[INFO] 10.244.0.20:40606 - 49433 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001834019s
	[INFO] 10.244.0.26:35309 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000170417s
	[INFO] 10.244.0.26:52476 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113322s
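	The NXDOMAIN bursts above are search-path expansion, not failures: with the default pod resolv.conf (ndots:5), an external name such as storage.googleapis.com is first tried against cluster.local and the GCE search domains before the bare query succeeds with NOERROR. Reproducing one expansion by hand from inside a pod (the kube-dns service IP 10.96.0.10 is the usual default, an assumption rather than something this report states):

	# The expanded form fails, the bare name resolves - exactly the pattern
	# in the coredns log above.
	dig @10.96.0.10 storage.googleapis.com.svc.cluster.local +short
	dig @10.96.0.10 storage.googleapis.com +short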
	
	
	==> describe nodes <==
	Name:               addons-877132
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-877132
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-877132
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_06_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-877132
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:05:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-877132
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:14:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:11:37 +0000   Thu, 15 Aug 2024 00:05:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:11:37 +0000   Thu, 15 Aug 2024 00:05:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:11:37 +0000   Thu, 15 Aug 2024 00:05:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:11:37 +0000   Thu, 15 Aug 2024 00:06:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-877132
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ac911fcfea74347829f75c9c0b9cec6
	  System UUID:                c27f2cf4-9042-4197-8c06-a1fdd73beeb7
	  Boot ID:                    adfcefd8-b451-4316-855f-752470c63d29
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  default                     hello-world-app-55bf9c44b4-jw59v           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 coredns-6f6b679f8f-c42pc                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m1s
	  kube-system                 etcd-addons-877132                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m7s
	  kube-system                 kindnet-chbk7                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m2s
	  kube-system                 kube-apiserver-addons-877132               250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-controller-manager-addons-877132      200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-proxy-v6kx7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-scheduler-addons-877132               100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 metrics-server-8988944d9-sgrxc             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m57s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  local-path-storage          local-path-provisioner-86d989889c-zjfx8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m56s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  8m12s (x8 over 8m12s)  kubelet          Node addons-877132 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m12s (x8 over 8m12s)  kubelet          Node addons-877132 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m12s (x7 over 8m12s)  kubelet          Node addons-877132 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m7s                   kubelet          Node addons-877132 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m7s                   kubelet          Node addons-877132 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m7s                   kubelet          Node addons-877132 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m3s                   node-controller  Node addons-877132 event: Registered Node addons-877132 in Controller
	  Normal   NodeReady                7m44s                  kubelet          Node addons-877132 status is now: NodeReady
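	
	The percentages in "Allocated resources" are requests over allocatable capacity: 950m of 8000m CPU is ~11%, and 420Mi of 32859312Ki (~31.3Gi) memory is ~1%. To regenerate this table against a live cluster (the --context form is an assumption for a host-side run; the log itself invoked kubectl on the node with an explicit kubeconfig):

	kubectl --context addons-877132 describe node addons-877132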
	
	
	==> dmesg <==
	[  +0.000630] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000616] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000609] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.594631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044972] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005902] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013048] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002588] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017548] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.299942] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 00:09] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +1.000074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +2.015815] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +4.255606] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[  +8.191208] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[ +16.126475] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
	[Aug15 00:10] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 96 a8 0a 24 52 dc de 81 16 9f 0c cd 08 00
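	
	The "martian source" lines mean packets claiming a loopback source (127.0.0.1) arrived on eth0; the kernel logs them only while log_martians is enabled. A generic check of the relevant sysctls on the node (a diagnostic suggestion, not something this test run performed):

	minikube -p addons-877132 ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter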
	
	
	==> etcd [f16f228580088705d9d978dd7020a6f22abb4f154e076a1986297e2d03e0cdec] <==
	{"level":"warn","ts":"2024-08-15T00:06:09.661825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.013375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:06:09.661889Z","caller":"traceutil/trace.go:171","msg":"trace[2058467710] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:0; response_revision:449; }","duration":"107.078526ms","start":"2024-08-15T00:06:09.554802Z","end":"2024-08-15T00:06:09.661881Z","steps":["trace[2058467710] 'agreement among raft nodes before linearized reading'  (duration: 106.974073ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:09.960832Z","caller":"traceutil/trace.go:171","msg":"trace[1035712257] linearizableReadLoop","detail":"{readStateIndex:469; appliedIndex:468; }","duration":"186.731601ms","start":"2024-08-15T00:06:09.774078Z","end":"2024-08-15T00:06:09.960809Z","steps":["trace[1035712257] 'read index received'  (duration: 183.924781ms)","trace[1035712257] 'applied index is now lower than readState.Index'  (duration: 2.806055ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:06:09.961756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.995899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:06:09.962261Z","caller":"traceutil/trace.go:171","msg":"trace[356290812] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"201.072963ms","start":"2024-08-15T00:06:09.760739Z","end":"2024-08-15T00:06:09.961812Z","steps":["trace[356290812] 'agreement among raft nodes before linearized reading'  (duration: 200.95514ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:09.962450Z","caller":"traceutil/trace.go:171","msg":"trace[1489601163] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"197.664471ms","start":"2024-08-15T00:06:09.764761Z","end":"2024-08-15T00:06:09.962426Z","steps":["trace[1489601163] 'process raft request'  (duration: 192.720086ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:10.158962Z","caller":"traceutil/trace.go:171","msg":"trace[1685509554] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"101.328154ms","start":"2024-08-15T00:06:10.057619Z","end":"2024-08-15T00:06:10.158947Z","steps":["trace[1685509554] 'process raft request'  (duration: 98.627594ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:10.760371Z","caller":"traceutil/trace.go:171","msg":"trace[109234857] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"184.795977ms","start":"2024-08-15T00:06:10.575558Z","end":"2024-08-15T00:06:10.760354Z","steps":["trace[109234857] 'process raft request'  (duration: 178.834908ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:06:10.760803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.370207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/registry-6fb4cdfc84\" ","response":"range_response_count:1 size:2551"}
	{"level":"info","ts":"2024-08-15T00:06:10.760848Z","caller":"traceutil/trace.go:171","msg":"trace[1951375711] range","detail":"{range_begin:/registry/replicasets/kube-system/registry-6fb4cdfc84; range_end:; response_count:1; response_revision:515; }","duration":"103.424885ms","start":"2024-08-15T00:06:10.657413Z","end":"2024-08-15T00:06:10.760838Z","steps":["trace[1951375711] 'agreement among raft nodes before linearized reading'  (duration: 103.276902ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:06:10.760670Z","caller":"traceutil/trace.go:171","msg":"trace[203738500] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:524; }","duration":"103.241482ms","start":"2024-08-15T00:06:10.657417Z","end":"2024-08-15T00:06:10.760659Z","steps":["trace[203738500] 'read index received'  (duration: 96.983851ms)","trace[203738500] 'applied index is now lower than readState.Index'  (duration: 6.256736ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:06:10.761005Z","caller":"traceutil/trace.go:171","msg":"trace[874513999] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"103.439543ms","start":"2024-08-15T00:06:10.657558Z","end":"2024-08-15T00:06:10.760998Z","steps":["trace[874513999] 'process raft request'  (duration: 102.874463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:06:10.764836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.833354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-08-15T00:06:10.764930Z","caller":"traceutil/trace.go:171","msg":"trace[798552039] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:516; }","duration":"102.934075ms","start":"2024-08-15T00:06:10.661985Z","end":"2024-08-15T00:06:10.764919Z","steps":["trace[798552039] 'agreement among raft nodes before linearized reading'  (duration: 102.728931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:06:10.765323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.385976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-877132\" ","response":"range_response_count:1 size:5648"}
	{"level":"info","ts":"2024-08-15T00:06:10.765416Z","caller":"traceutil/trace.go:171","msg":"trace[306473127] range","detail":"{range_begin:/registry/minions/addons-877132; range_end:; response_count:1; response_revision:516; }","duration":"100.480741ms","start":"2024-08-15T00:06:10.664926Z","end":"2024-08-15T00:06:10.765407Z","steps":["trace[306473127] 'agreement among raft nodes before linearized reading'  (duration: 100.370266ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:07:08.287875Z","caller":"traceutil/trace.go:171","msg":"trace[1444096404] linearizableReadLoop","detail":"{readStateIndex:1251; appliedIndex:1250; }","duration":"114.749683ms","start":"2024-08-15T00:07:08.173107Z","end":"2024-08-15T00:07:08.287857Z","steps":["trace[1444096404] 'read index received'  (duration: 114.546126ms)","trace[1444096404] 'applied index is now lower than readState.Index'  (duration: 202.358µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:07:08.287946Z","caller":"traceutil/trace.go:171","msg":"trace[1350768147] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"116.664519ms","start":"2024-08-15T00:07:08.171264Z","end":"2024-08-15T00:07:08.287929Z","steps":["trace[1350768147] 'process raft request'  (duration: 116.440866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:07:08.288062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.93862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-877132\" ","response":"range_response_count:1 size:9170"}
	{"level":"info","ts":"2024-08-15T00:07:08.288095Z","caller":"traceutil/trace.go:171","msg":"trace[205809032] range","detail":"{range_begin:/registry/minions/addons-877132; range_end:; response_count:1; response_revision:1219; }","duration":"114.983891ms","start":"2024-08-15T00:07:08.173100Z","end":"2024-08-15T00:07:08.288084Z","steps":["trace[205809032] 'agreement among raft nodes before linearized reading'  (duration: 114.848983ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:07:19.613367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.767064ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031214691300417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:70cc91535af63440>","response":"size:41"}
	{"level":"info","ts":"2024-08-15T00:07:19.763949Z","caller":"traceutil/trace.go:171","msg":"trace[1049542164] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"192.04081ms","start":"2024-08-15T00:07:19.571892Z","end":"2024-08-15T00:07:19.763933Z","steps":["trace[1049542164] 'process raft request'  (duration: 169.989844ms)","trace[1049542164] 'compare'  (duration: 21.954224ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:07:19.765023Z","caller":"traceutil/trace.go:171","msg":"trace[49433671] transaction","detail":"{read_only:false; response_revision:1245; number_of_response:1; }","duration":"150.971344ms","start":"2024-08-15T00:07:19.614039Z","end":"2024-08-15T00:07:19.765010Z","steps":["trace[49433671] 'process raft request'  (duration: 150.874636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:09:20.066162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.995095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:09:20.066232Z","caller":"traceutil/trace.go:171","msg":"trace[1279599202] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1860; }","duration":"108.076285ms","start":"2024-08-15T00:09:19.958140Z","end":"2024-08-15T00:09:20.066216Z","steps":["trace[1279599202] 'range keys from in-memory index tree'  (duration: 107.949119ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:14:07 up  1:56,  0 users,  load average: 0.10, 0.31, 0.27
	Linux addons-877132 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [17f6bd6dd22c5288ff6a4dc156ad4dd1d32d2895bfc8398e63705bc5613cf677] <==
	E0815 00:12:52.572198       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 00:12:53.555439       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:12:53.555471       1 main.go:299] handling current node
	I0815 00:13:03.555291       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:13:03.555324       1 main.go:299] handling current node
	I0815 00:13:13.555483       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:13:13.555522       1 main.go:299] handling current node
	W0815 00:13:15.389887       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:13:15.389921       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:13:23.555479       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:13:23.555513       1 main.go:299] handling current node
	I0815 00:13:33.555293       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:13:33.555325       1 main.go:299] handling current node
	W0815 00:13:33.566915       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:13:33.566948       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 00:13:42.033028       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:13:42.033058       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 00:13:43.555921       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:13:43.555960       1 main.go:299] handling current node
	W0815 00:13:49.511670       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:13:49.511703       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:13:53.555722       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:13:53.555761       1 main.go:299] handling current node
	I0815 00:14:03.555442       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:14:03.555480       1 main.go:299] handling current node
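
The warn/error pairs above show the kindnet service account being denied list/watch on pods, namespaces, and networkpolicies while per-node handling keeps working. A way to check what the account is actually granted; the ClusterRole name "kindnet" is the usual default and an assumption here:

	# does the kindnet service account hold the permission it keeps asking for?
	kubectl --context addons-877132 auth can-i list pods \
	  --as=system:serviceaccount:kube-system:kindnet
	# inspect the ClusterRole it is expected to be bound to
	kubectl --context addons-877132 get clusterrole kindnet -o yaml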
	
	
	==> kube-apiserver [ea70e9f2778e6bfd0482547ef4e30fb6b1e37e3161a709b7913a069a0d6c1249] <==
	I0815 00:08:04.597292       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 00:08:22.099434       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55458: use of closed network connection
	E0815 00:08:22.246304       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55482: use of closed network connection
	E0815 00:08:50.694888       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:40444: read: connection reset by peer
	I0815 00:08:53.212855       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.166.95"}
	I0815 00:08:55.861522       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 00:08:56.104430       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 00:08:57.263455       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 00:09:01.545972       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 00:09:01.698738       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.145.87"}
	I0815 00:09:30.366936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.366998       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.378927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.378969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.382001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.382053       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.391883       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.392003       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:30.498706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:30.498743       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 00:09:31.382946       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 00:09:31.499329       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 00:09:31.509054       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0815 00:11:20.893935       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.70.207"}
	E0815 00:11:22.915796       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
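
The final authentication error is expected fallout from the Ingress addon being disabled mid-run: the ingress-nginx namespace is deleted while its controller pod is still presenting a token, so the apiserver briefly sees a bearer token for a service account that no longer exists. A quick check that the namespace is actually gone:

	kubectl --context addons-877132 get ns ingress-nginx   # expect NotFound once teardown completes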
	
	
	==> kube-controller-manager [4043a5cc95e0b642f3d43fc29910c36484e438e6f5eb9296ff38dcbd460cb280] <==
	W0815 00:12:28.224607       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:28.224647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:35.817251       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:35.817289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:40.424175       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:40.424212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:45.431180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:45.431221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:07.962980       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:07.963022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:19.094179       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:19.094216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:25.771996       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:25.772037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:26.160402       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:26.160442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:54.135540       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:54.135598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:14:04.889932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:14:04.889972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:14:05.700110       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:14:05.700147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:14:05.957303       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="8.411µs"
	W0815 00:14:06.133304       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:14:06.133341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
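
The repeated PartialObjectMetadata failures come from the metadata informers the garbage collector keeps for API groups that were removed mid-run (the snapshot.storage.k8s.io and gadget.kinvolk.io groups were deleted between 00:08:57 and 00:09:31, per the apiserver log above); they are noise rather than a fault. A hedged way to confirm the groups are gone:

	# neither group should appear once the CSI and gadget addons are disabled
	kubectl --context addons-877132 api-resources | grep -E 'snapshot|gadget'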
	
	
	==> kube-proxy [e5fd37ee5ee48f313a2bbb8de2bf49949a8ca143f909bf33e1d7a3ca648839a1] <==
	I0815 00:06:08.771141       1 server_linux.go:66] "Using iptables proxy"
	I0815 00:06:09.677856       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 00:06:09.677943       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:06:10.256713       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 00:06:10.256859       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:06:10.265814       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:06:10.266511       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:06:10.266784       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:06:10.268405       1 config.go:197] "Starting service config controller"
	I0815 00:06:10.269673       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:06:10.269385       1 config.go:326] "Starting node config controller"
	I0815 00:06:10.269842       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:06:10.268940       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:06:10.269934       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:06:10.370849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:06:10.373009       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:06:10.373024       1 shared_informer.go:320] Caches are synced for node config
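
kube-proxy came up cleanly in iptables mode; the one warning (nodePortAddresses unset) is informational. To see the rules the iptables proxier programmed, one option (a sketch; KUBE-SERVICES is kube-proxy's standard entry chain in the nat table):

	minikube -p addons-877132 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20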
	
	
	==> kube-scheduler [bd77b5ecfadb9e4b0eb677138b067975344fac163e26352db7d5ce14d50ed8f0] <==
	W0815 00:05:58.063472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:05:58.063858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:05:58.063889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:05:58.063914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:05:58.063935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.063800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:05:58.063952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.984099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:05:58.984141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:58.991354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:05:58.991389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.015615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:05:59.015655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.046949       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:05:59.046980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:05:59.062581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 00:05:59.062629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.078903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:05:59.078935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:05:59.094087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:05:59.094126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:06:01.762374       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
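
All of the forbidden errors above date from the first seconds of startup (00:05:58-59), before the apiserver finished bootstrapping RBAC; the "Caches are synced" line at 00:06:01 shows the scheduler recovered on its own. A quick liveness check, relying on the component=kube-scheduler label kubeadm puts on the static pod (assumed here):

	kubectl --context addons-877132 -n kube-system get pod -l component=kube-scheduler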
	
	
	==> kubelet <==
	Aug 15 00:12:30 addons-877132 kubelet[1633]: E0815 00:12:30.325989    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680750325741473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:30 addons-877132 kubelet[1633]: E0815 00:12:30.326020    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680750325741473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:40 addons-877132 kubelet[1633]: E0815 00:12:40.329122    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680760328854294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:40 addons-877132 kubelet[1633]: E0815 00:12:40.329154    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680760328854294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:50 addons-877132 kubelet[1633]: E0815 00:12:50.331698    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680770331505688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:50 addons-877132 kubelet[1633]: E0815 00:12:50.331727    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680770331505688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:00 addons-877132 kubelet[1633]: E0815 00:13:00.333845    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680780333587140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:00 addons-877132 kubelet[1633]: E0815 00:13:00.333876    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680780333587140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:10 addons-877132 kubelet[1633]: E0815 00:13:10.336259    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680790336026339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:10 addons-877132 kubelet[1633]: E0815 00:13:10.336292    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680790336026339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:20 addons-877132 kubelet[1633]: E0815 00:13:20.338515    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680800338305471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:20 addons-877132 kubelet[1633]: E0815 00:13:20.338555    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680800338305471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:27 addons-877132 kubelet[1633]: I0815 00:13:27.070703    1633 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:13:30 addons-877132 kubelet[1633]: E0815 00:13:30.340457    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680810340234419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:30 addons-877132 kubelet[1633]: E0815 00:13:30.340492    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680810340234419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:40 addons-877132 kubelet[1633]: E0815 00:13:40.343379    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680820343147815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:40 addons-877132 kubelet[1633]: E0815 00:13:40.343408    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680820343147815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:50 addons-877132 kubelet[1633]: E0815 00:13:50.345578    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680830345341187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:50 addons-877132 kubelet[1633]: E0815 00:13:50.345607    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680830345341187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:14:00 addons-877132 kubelet[1633]: E0815 00:14:00.347482    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680840347265171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:14:00 addons-877132 kubelet[1633]: E0815 00:14:00.347510    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680840347265171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:14:07 addons-877132 kubelet[1633]: I0815 00:14:07.281758    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrfbs\" (UniqueName: \"kubernetes.io/projected/39bb006b-3cb8-4b3f-bd6c-a14e00873f12-kube-api-access-wrfbs\") pod \"39bb006b-3cb8-4b3f-bd6c-a14e00873f12\" (UID: \"39bb006b-3cb8-4b3f-bd6c-a14e00873f12\") "
	Aug 15 00:14:07 addons-877132 kubelet[1633]: I0815 00:14:07.281832    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/39bb006b-3cb8-4b3f-bd6c-a14e00873f12-tmp-dir\") pod \"39bb006b-3cb8-4b3f-bd6c-a14e00873f12\" (UID: \"39bb006b-3cb8-4b3f-bd6c-a14e00873f12\") "
	Aug 15 00:14:07 addons-877132 kubelet[1633]: I0815 00:14:07.282117    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39bb006b-3cb8-4b3f-bd6c-a14e00873f12-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "39bb006b-3cb8-4b3f-bd6c-a14e00873f12" (UID: "39bb006b-3cb8-4b3f-bd6c-a14e00873f12"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 15 00:14:07 addons-877132 kubelet[1633]: I0815 00:14:07.283480    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bb006b-3cb8-4b3f-bd6c-a14e00873f12-kube-api-access-wrfbs" (OuterVolumeSpecName: "kube-api-access-wrfbs") pod "39bb006b-3cb8-4b3f-bd6c-a14e00873f12" (UID: "39bb006b-3cb8-4b3f-bd6c-a14e00873f12"). InnerVolumeSpecName "kube-api-access-wrfbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
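
The recurring eviction-manager errors mean the kubelet rejected CRI-O's ImageFsInfo response as incomplete (it reports "missing image stats" even though an image-filesystem entry is present), so eviction synchronization was skipped every cycle. To inspect what the runtime reports directly (a sketch; crictl ships on minikube nodes):

	# dump the raw image-filesystem info CRI-O returns over the CRI
	minikube -p addons-877132 ssh -- sudo crictl imagefsinfo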
	
	
	==> storage-provisioner [03e7fb303164d2ef427adb835d31e224473b8f74e6cf70ad41f8bf76d02c9292] <==
	I0815 00:06:24.994803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:06:25.003166       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:06:25.003199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:06:25.009325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:06:25.009364       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7744846a-ef9e-42e9-90e6-1e26a8341167", APIVersion:"v1", ResourceVersion:"943", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-877132_ee44d2d3-4c44-4a23-bef6-1d5ee9ac4c4c became leader
	I0815 00:06:25.009462       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-877132_ee44d2d3-4c44-4a23-bef6-1d5ee9ac4c4c!
	I0815 00:06:25.110558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-877132_ee44d2d3-4c44-4a23-bef6-1d5ee9ac4c4c!
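
storage-provisioner acquired its leader lease on the first attempt. The lease is stored as an annotation on a kube-system Endpoints object, so the current holder can be read back with (hedged; the annotation key is client-go's standard leader-election key):

	kubectl --context addons-877132 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'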
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-877132 -n addons-877132
helpers_test.go:261: (dbg) Run:  kubectl --context addons-877132 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (325.75s)
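
TestAddons/parallel/MetricsServer failed after 325.75s; the controller-manager log above shows its ReplicaSet (metrics-server-8988944d9) still being reconciled at 00:14:05, right as the post-mortem was collected. Against a live cluster in this state, a first triage pass might be (the k8s-app=metrics-server label is what the addon normally applies, an assumption here):

	kubectl --context addons-877132 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-877132 top nodes   # fails until the metrics API serves data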


Test pass (301/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 4.8
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.18
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.03
21 TestBinaryMirror 0.72
22 TestOffline 56.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 171.08
31 TestAddons/serial/GCPAuth/Namespaces 0.13
33 TestAddons/parallel/Registry 14.42
35 TestAddons/parallel/InspektorGadget 10.8
37 TestAddons/parallel/HelmTiller 8.58
39 TestAddons/parallel/CSI 60.2
40 TestAddons/parallel/Headlamp 15.4
41 TestAddons/parallel/CloudSpanner 5.61
42 TestAddons/parallel/LocalPath 8.03
43 TestAddons/parallel/NvidiaDevicePlugin 5.45
44 TestAddons/parallel/Yakd 11.61
45 TestAddons/StoppedEnableDisable 12.01
46 TestCertOptions 29.31
47 TestCertExpiration 218.99
49 TestForceSystemdFlag 24.83
50 TestForceSystemdEnv 41.36
52 TestKVMDriverInstallOrUpdate 1.19
56 TestErrorSpam/setup 23.15
57 TestErrorSpam/start 0.53
58 TestErrorSpam/status 0.82
59 TestErrorSpam/pause 1.43
60 TestErrorSpam/unpause 1.57
61 TestErrorSpam/stop 1.32
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 39.01
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.8
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
73 TestFunctional/serial/CacheCmd/cache/add_local 0.92
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 33.43
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.23
84 TestFunctional/serial/LogsFileCmd 1.25
85 TestFunctional/serial/InvalidService 3.77
87 TestFunctional/parallel/ConfigCmd 0.38
88 TestFunctional/parallel/DashboardCmd 7.64
89 TestFunctional/parallel/DryRun 0.45
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 1.27
95 TestFunctional/parallel/ServiceCmdConnect 7.9
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 29.53
99 TestFunctional/parallel/SSHCmd 0.52
100 TestFunctional/parallel/CpCmd 1.93
101 TestFunctional/parallel/MySQL 18.78
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.66
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
114 TestFunctional/parallel/MountCmd/any-port 7.16
115 TestFunctional/parallel/ProfileCmd/profile_list 0.38
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
117 TestFunctional/parallel/Version/short 0.05
118 TestFunctional/parallel/Version/components 0.57
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.4
123 TestFunctional/parallel/ImageCommands/ImageBuild 2.8
124 TestFunctional/parallel/ImageCommands/Setup 0.41
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.4
128 TestFunctional/parallel/MountCmd/specific-port 1.64
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.79
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
132 TestFunctional/parallel/ServiceCmd/List 0.64
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
137 TestFunctional/parallel/ServiceCmd/Format 0.35
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
141 TestFunctional/parallel/ServiceCmd/URL 0.35
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.28
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.06
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 98.56
160 TestMultiControlPlane/serial/DeployApp 3.73
161 TestMultiControlPlane/serial/PingHostFromPods 0.94
162 TestMultiControlPlane/serial/AddWorkerNode 33.1
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.61
165 TestMultiControlPlane/serial/CopyFile 15.14
166 TestMultiControlPlane/serial/StopSecondaryNode 12.42
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
168 TestMultiControlPlane/serial/RestartSecondaryNode 31.66
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.39
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 149.86
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.18
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
173 TestMultiControlPlane/serial/StopCluster 35.45
174 TestMultiControlPlane/serial/RestartCluster 103.77
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
176 TestMultiControlPlane/serial/AddSecondaryNode 39.24
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
181 TestJSONOutput/start/Command 38.82
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.63
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.56
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.69
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.18
206 TestKicCustomNetwork/create_custom_network 26.71
207 TestKicCustomNetwork/use_default_bridge_network 22.89
208 TestKicExistingNetwork 25.55
209 TestKicCustomSubnet 22.74
210 TestKicStaticIP 22.69
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 52.92
215 TestMountStart/serial/StartWithMountFirst 5.41
216 TestMountStart/serial/VerifyMountFirst 0.23
217 TestMountStart/serial/StartWithMountSecond 7.94
218 TestMountStart/serial/VerifyMountSecond 0.24
219 TestMountStart/serial/DeleteFirst 1.56
220 TestMountStart/serial/VerifyMountPostDelete 0.23
221 TestMountStart/serial/Stop 1.16
222 TestMountStart/serial/RestartStopped 7.05
223 TestMountStart/serial/VerifyMountPostStop 0.24
226 TestMultiNode/serial/FreshStart2Nodes 63.9
227 TestMultiNode/serial/DeployApp2Nodes 2.97
228 TestMultiNode/serial/PingHostFrom2Pods 0.65
229 TestMultiNode/serial/AddNode 25.73
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.27
232 TestMultiNode/serial/CopyFile 8.6
233 TestMultiNode/serial/StopNode 2.04
234 TestMultiNode/serial/StartAfterStop 9.08
235 TestMultiNode/serial/RestartKeepsNodes 101.68
236 TestMultiNode/serial/DeleteNode 5.15
237 TestMultiNode/serial/StopMultiNode 23.6
238 TestMultiNode/serial/RestartMultiNode 50.15
239 TestMultiNode/serial/ValidateNameConflict 21.86
244 TestPreload 99.04
246 TestScheduledStopUnix 98.69
249 TestInsufficientStorage 9.43
250 TestRunningBinaryUpgrade 54.5
252 TestKubernetesUpgrade 356.32
253 TestMissingContainerUpgrade 129.78
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 38.21
257 TestNoKubernetes/serial/StartWithStopK8s 7.48
258 TestNoKubernetes/serial/Start 4.63
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
260 TestNoKubernetes/serial/ProfileList 7.19
261 TestNoKubernetes/serial/Stop 1.9
262 TestNoKubernetes/serial/StartNoArgs 6.6
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
271 TestStoppedBinaryUpgrade/Setup 0.51
272 TestStoppedBinaryUpgrade/Upgrade 57.37
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
281 TestNetworkPlugins/group/false 3.3
286 TestPause/serial/Start 47.2
288 TestStartStop/group/old-k8s-version/serial/FirstStart 128.05
289 TestPause/serial/SecondStartNoReconfiguration 30.77
290 TestPause/serial/Pause 0.66
291 TestPause/serial/VerifyStatus 0.29
292 TestPause/serial/Unpause 0.6
293 TestPause/serial/PauseAgain 0.71
294 TestPause/serial/DeletePaused 2.54
295 TestPause/serial/VerifyDeletedResources 14.75
297 TestStartStop/group/no-preload/serial/FirstStart 57.87
299 TestStartStop/group/embed-certs/serial/FirstStart 44.85
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
301 TestStartStop/group/embed-certs/serial/DeployApp 8.24
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.8
303 TestStartStop/group/old-k8s-version/serial/Stop 11.97
304 TestStartStop/group/no-preload/serial/DeployApp 9.24
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.79
306 TestStartStop/group/embed-certs/serial/Stop 11.81
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
309 TestStartStop/group/old-k8s-version/serial/SecondStart 144.7
310 TestStartStop/group/no-preload/serial/Stop 11.86
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
312 TestStartStop/group/embed-certs/serial/SecondStart 276.97
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/no-preload/serial/SecondStart 262.72
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.37
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.82
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.26
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
325 TestStartStop/group/old-k8s-version/serial/Pause 2.43
327 TestStartStop/group/newest-cni/serial/FirstStart 30.35
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
330 TestStartStop/group/newest-cni/serial/Stop 1.33
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
332 TestStartStop/group/newest-cni/serial/SecondStart 13.65
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
336 TestStartStop/group/newest-cni/serial/Pause 2.49
337 TestNetworkPlugins/group/auto/Start 43.45
338 TestNetworkPlugins/group/auto/KubeletFlags 0.25
339 TestNetworkPlugins/group/auto/NetCatPod 8.18
340 TestNetworkPlugins/group/auto/DNS 0.13
341 TestNetworkPlugins/group/auto/Localhost 0.11
342 TestNetworkPlugins/group/auto/HairPin 0.11
343 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
345 TestNetworkPlugins/group/kindnet/Start 42.49
346 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
348 TestStartStop/group/no-preload/serial/Pause 3.14
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
350 TestNetworkPlugins/group/calico/Start 56.22
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
352 TestStartStop/group/embed-certs/serial/Pause 3.08
353 TestNetworkPlugins/group/custom-flannel/Start 45.52
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
356 TestNetworkPlugins/group/kindnet/NetCatPod 8.19
357 TestNetworkPlugins/group/kindnet/DNS 0.12
358 TestNetworkPlugins/group/kindnet/Localhost 0.1
359 TestNetworkPlugins/group/kindnet/HairPin 0.1
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.27
364 TestNetworkPlugins/group/calico/NetCatPod 12.18
365 TestNetworkPlugins/group/custom-flannel/DNS 0.2
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
368 TestNetworkPlugins/group/enable-default-cni/Start 33.76
369 TestNetworkPlugins/group/calico/DNS 0.14
370 TestNetworkPlugins/group/calico/Localhost 0.12
371 TestNetworkPlugins/group/calico/HairPin 0.11
372 TestNetworkPlugins/group/flannel/Start 47.5
373 TestNetworkPlugins/group/bridge/Start 62.11
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.48
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
385 TestNetworkPlugins/group/flannel/NetCatPod 9.18
386 TestNetworkPlugins/group/flannel/DNS 0.12
387 TestNetworkPlugins/group/flannel/Localhost 0.1
388 TestNetworkPlugins/group/flannel/HairPin 0.1
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 9.2
391 TestNetworkPlugins/group/bridge/DNS 0.11
392 TestNetworkPlugins/group/bridge/Localhost 0.11
393 TestNetworkPlugins/group/bridge/HairPin 0.1

TestDownloadOnly/v1.20.0/json-events (7.63s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-300790 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-300790 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.630100446s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.63s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-300790
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-300790: exit status 85 (55.916862ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-300790 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |          |
	|         | -p download-only-300790        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:05:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:05:08.486322   32117 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:05:08.486554   32117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:08.486562   32117 out.go:304] Setting ErrFile to fd 2...
	I0815 00:05:08.486566   32117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:08.486740   32117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	W0815 00:05:08.486846   32117 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19443-25263/.minikube/config/config.json: open /home/jenkins/minikube-integration/19443-25263/.minikube/config/config.json: no such file or directory
	I0815 00:05:08.487386   32117 out.go:298] Setting JSON to true
	I0815 00:05:08.488200   32117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6445,"bootTime":1723673863,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:05:08.488256   32117 start.go:139] virtualization: kvm guest
	I0815 00:05:08.490697   32117 out.go:97] [download-only-300790] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:05:08.490800   32117 notify.go:220] Checking for updates...
	W0815 00:05:08.490856   32117 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 00:05:08.492166   32117 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:05:08.493372   32117 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:05:08.494597   32117 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:05:08.495753   32117 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:05:08.496949   32117 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 00:05:08.499295   32117 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:05:08.499465   32117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:05:08.521087   32117 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:05:08.521175   32117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:08.850134   32117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 00:05:08.841434185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:08.850235   32117 docker.go:307] overlay module found
	I0815 00:05:08.852136   32117 out.go:97] Using the docker driver based on user configuration
	I0815 00:05:08.852157   32117 start.go:297] selected driver: docker
	I0815 00:05:08.852170   32117 start.go:901] validating driver "docker" against <nil>
	I0815 00:05:08.852248   32117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:08.898578   32117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 00:05:08.890455304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:08.898790   32117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:05:08.899305   32117 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0815 00:05:08.899475   32117 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:05:08.901229   32117 out.go:169] Using Docker driver with root privileges
	I0815 00:05:08.902569   32117 cni.go:84] Creating CNI manager for ""
	I0815 00:05:08.902594   32117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:05:08.902607   32117 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:05:08.902684   32117 start.go:340] cluster config:
	{Name:download-only-300790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-300790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:08.904257   32117 out.go:97] Starting "download-only-300790" primary control-plane node in "download-only-300790" cluster
	I0815 00:05:08.904279   32117 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:05:08.905551   32117 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:05:08.905609   32117 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 00:05:08.905730   32117 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:05:08.921054   32117 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:05:08.921233   32117 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:05:08.921312   32117 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:05:08.930882   32117 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:05:08.930898   32117 cache.go:56] Caching tarball of preloaded images
	I0815 00:05:08.930993   32117 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 00:05:08.932906   32117 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 00:05:08.932923   32117 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 00:05:08.957029   32117 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:05:12.979976   32117 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 00:05:12.980063   32117 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19443-25263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-300790 host does not exist
	  To start a cluster, run: "minikube start -p download-only-300790"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
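
Note: the exit status 85 above is expected, not a regression. A --download-only start never creates a host, so "minikube logs" has nothing to read, and the test asserts the non-zero exit. A minimal shell sketch of the same check, reusing the flags and profile name from the log above:

	# Download artifacts only; no node is created for this profile.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-300790 \
	  --force --alsologtostderr --kubernetes-version=v1.20.0 \
	  --container-runtime=crio --driver=docker

	# With no running host, logs should fail; this run observed exit status 85.
	out/minikube-linux-amd64 logs -p download-only-300790
	echo "logs exited with status $?"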

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-300790
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (4.8s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-210576 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-210576 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.796042748s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.80s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-210576
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-210576: exit status 85 (55.411573ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-300790 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | -p download-only-300790        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| delete  | -p download-only-300790        | download-only-300790 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| start   | -o=json --download-only        | download-only-210576 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | -p download-only-210576        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:05:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:05:16.473223   32465 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:05:16.473570   32465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:16.473592   32465 out.go:304] Setting ErrFile to fd 2...
	I0815 00:05:16.473600   32465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:16.474026   32465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:05:16.474960   32465 out.go:298] Setting JSON to true
	I0815 00:05:16.475749   32465 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6453,"bootTime":1723673863,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:05:16.475808   32465 start.go:139] virtualization: kvm guest
	I0815 00:05:16.477736   32465 out.go:97] [download-only-210576] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:05:16.477870   32465 notify.go:220] Checking for updates...
	I0815 00:05:16.478935   32465 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:05:16.480181   32465 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:05:16.481396   32465 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:05:16.482614   32465 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:05:16.483806   32465 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 00:05:16.485871   32465 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:05:16.486081   32465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:05:16.506077   32465 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:05:16.506163   32465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:16.553725   32465 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-15 00:05:16.544951272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:16.553855   32465 docker.go:307] overlay module found
	I0815 00:05:16.555424   32465 out.go:97] Using the docker driver based on user configuration
	I0815 00:05:16.555446   32465 start.go:297] selected driver: docker
	I0815 00:05:16.555457   32465 start.go:901] validating driver "docker" against <nil>
	I0815 00:05:16.555535   32465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:05:16.602089   32465 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-15 00:05:16.593519216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:05:16.602243   32465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:05:16.602697   32465 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0815 00:05:16.602830   32465 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:05:16.604860   32465 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-210576 host does not exist
	  To start a cluster, run: "minikube start -p download-only-210576"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.18s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-210576
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.03s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-237330 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-237330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-237330
--- PASS: TestDownloadOnlyKic (1.03s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-616195 --alsologtostderr --binary-mirror http://127.0.0.1:46729 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-616195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-616195
--- PASS: TestBinaryMirror (0.72s)

TestOffline (56.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-521371 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-521371 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.957474365s)
helpers_test.go:175: Cleaning up "offline-crio-521371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-521371
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-521371: (2.424188756s)
--- PASS: TestOffline (56.39s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-877132
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-877132: exit status 85 (51.444172ms)

-- stdout --
	* Profile "addons-877132" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877132"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-877132
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-877132: exit status 85 (51.302018ms)

-- stdout --
	* Profile "addons-877132" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877132"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (171.08s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-877132 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-877132 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m51.081832995s)
--- PASS: TestAddons/Setup (171.08s)
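
Note: after a multi-addon start like the one above, the quickest sanity check is the addon status table; a small sketch using the same profile:

	# Show enabled/disabled state for every addon in the addons-877132 profile.
	out/minikube-linux-amd64 -p addons-877132 addons list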

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-877132 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-877132 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/parallel/Registry (14.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.411197ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-r4n2w" [6ba345fc-6428-44c4-a39f-a525f747a85d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002282064s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9j2gn" [dafac940-abdc-432d-9a46-cf80da8907aa] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00329618s
addons_test.go:342: (dbg) Run:  kubectl --context addons-877132 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-877132 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-877132 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.500717595s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 ip
2024/08/15 00:08:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.42s)
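
Note: the registry check above reduces to two probes: resolve and fetch the in-cluster service, then hit the registry proxy on the node IP. A hand-run equivalent built from the commands in the log (the node IP comes from "minikube ip", 192.168.49.2 in this run):

	# In-cluster probe: the service DNS name must resolve and answer.
	kubectl --context addons-877132 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# Host-side probe: the registry proxy listens on the node IP at :5000.
	NODE_IP=$(out/minikube-linux-amd64 -p addons-877132 ip)
	curl -s "http://${NODE_IP}:5000/"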

TestAddons/parallel/InspektorGadget (10.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-chs9b" [b44d33ea-8491-422f-bec8-0817f4cfde47] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003839866s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-877132
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-877132: (5.791033703s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

TestAddons/parallel/HelmTiller (8.58s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.960907ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-bthmf" [62d076df-bde8-40cf-ab28-b8fba5fea0d6] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.002929148s
addons_test.go:475: (dbg) Run:  kubectl --context addons-877132 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-877132 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.099563395s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.58s)

TestAddons/parallel/CSI (60.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 25.90681ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-877132 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-877132 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [57940da8-6eb1-4d30-b802-ad633c8c11c6] Pending
helpers_test.go:344: "task-pv-pod" [57940da8-6eb1-4d30-b802-ad633c8c11c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [57940da8-6eb1-4d30-b802-ad633c8c11c6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003769383s
addons_test.go:590: (dbg) Run:  kubectl --context addons-877132 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877132 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877132 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-877132 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-877132 delete pod task-pv-pod: (1.147432791s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-877132 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-877132 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-877132 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0d2c7e42-c790-40ae-983c-c0b3791b3642] Pending
helpers_test.go:344: "task-pv-pod-restore" [0d2c7e42-c790-40ae-983c-c0b3791b3642] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003668568s
addons_test.go:632: (dbg) Run:  kubectl --context addons-877132 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-877132 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-877132 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.496106734s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.20s)
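
Note: the snapshot/restore round trip above hinges on a VolumeSnapshot pointing at the hpvc claim; the test polls its .status.readyToUse field before restoring. A sketch of what testdata/csi-hostpath-driver/snapshot.yaml plausibly contains (the object and claim names come from the log; the csi-hostpath-snapclass class name is an assumption):

	kubectl --context addons-877132 create -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo            # polled via .status.readyToUse above
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc  # the claim created earlier in the test
	EOF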

TestAddons/parallel/Headlamp (15.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-877132 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-jxrlh" [42462459-5ebf-46bc-91b8-c9c2dec215e4] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-jxrlh" [42462459-5ebf-46bc-91b8-c9c2dec215e4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-jxrlh" [42462459-5ebf-46bc-91b8-c9c2dec215e4] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00299335s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 addons disable headlamp --alsologtostderr -v=1: (5.69179839s)
--- PASS: TestAddons/parallel/Headlamp (15.40s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-4qmnn" [dbc6df6a-421b-4d41-aa54-257810454d50] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004336082s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-877132
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (8.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-877132 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-877132 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877132 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7412063c-14dc-481f-956f-9291ff5595b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7412063c-14dc-481f-956f-9291ff5595b9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7412063c-14dc-481f-956f-9291ff5595b9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003793672s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-877132 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 ssh "cat /opt/local-path-provisioner/pvc-56d7ae18-0d09-496f-9576-9fd79c71aa37_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-877132 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-877132 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.03s)
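
The LocalPath flow binds a PVC through the rancher local-path provisioner, runs a pod against it, and reads the provisioned file back from the node. A sketch of an equivalent claim, assuming the addon's default storage class name local-path and an illustrative 64Mi size (the test's actual pvc.yaml lives in testdata and may differ):

  kubectl --context addons-877132 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc                 # name taken from the log above
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-path   # assumed default class of the addon
    resources:
      requests:
        storage: 64Mi              # illustrative size
  EOF
  # stays Pending until a pod consumes it (WaitForFirstConsumer),
  # which is why the test polls the phase repeatedly before the pod runs
  kubectl --context addons-877132 get pvc test-pvc -o jsonpath={.status.phase}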

TestAddons/parallel/NvidiaDevicePlugin (5.45s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6d62n" [0b96b707-d892-4a7c-9728-5d4ddf5b5465] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003535523s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-877132
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.45s)

TestAddons/parallel/Yakd (11.61s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-h54wr" [0b7e3dee-a2f4-4736-9ecd-61fdb6f728fd] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003306901s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-877132 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-877132 addons disable yakd --alsologtostderr -v=1: (5.603133995s)
--- PASS: TestAddons/parallel/Yakd (11.61s)

TestAddons/StoppedEnableDisable (12.01s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-877132
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-877132: (11.789416366s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-877132
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-877132
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-877132
--- PASS: TestAddons/StoppedEnableDisable (12.01s)
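
What this test pins down is that addon toggling must not require a running cluster. A minimal sketch, assuming a minikube binary on PATH:

  minikube stop -p addons-877132
  # both commands below should succeed against the stopped profile
  minikube addons enable dashboard -p addons-877132
  minikube addons disable dashboard -p addons-877132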

TestCertOptions (29.31s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-421827 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-421827 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.468889279s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-421827 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-421827 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-421827 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-421827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-421827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-421827: (3.170000576s)
--- PASS: TestCertOptions (29.31s)
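
The openssl step above is where the custom SANs and port are actually verified. A by-hand sketch, assuming a minikube binary on PATH; the start flags are copied from the log and the grep just surfaces the SAN block of the serving cert:

  minikube start -p cert-options-421827 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=crio
  minikube -p cert-options-421827 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A 1 "Subject Alternative Name"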

TestCertExpiration (218.99s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-808958 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-808958 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.569085957s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-808958 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-808958 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (13.120570656s)
helpers_test.go:175: Cleaning up "cert-expiration-808958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-808958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-808958: (2.294391415s)
--- PASS: TestCertExpiration (218.99s)
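
Note the arithmetic: the two starts and the cleanup account for roughly 39s, so almost all of the 218.99s wall time is the test waiting out the 3m certificate lifetime between the starts. A sketch of the same flow, assuming a minikube binary on PATH:

  minikube start -p cert-expiration-808958 --memory=2048 --cert-expiration=3m \
    --driver=docker --container-runtime=crio
  sleep 180   # let the short-lived certs expire
  # the second start must regenerate certs rather than fail on expired ones
  minikube start -p cert-expiration-808958 --memory=2048 --cert-expiration=8760h \
    --driver=docker --container-runtime=crio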

TestForceSystemdFlag (24.83s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-456008 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-456008 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.181536743s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-456008 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-456008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-456008
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-456008: (2.389854352s)
--- PASS: TestForceSystemdFlag (24.83s)
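
The cat of 02-crio.conf is the actual assertion: with --force-systemd, CRI-O should be switched to the systemd cgroup manager. A sketch, assuming a minikube binary on PATH and that the drop-in carries the cgroup_manager key:

  minikube start -p force-systemd-flag-456008 --memory=2048 --force-systemd \
    --alsologtostderr -v=5 --driver=docker --container-runtime=crio
  minikube -p force-systemd-flag-456008 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
  # expected: cgroup_manager = "systemd"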

TestForceSystemdEnv (41.36s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-602834 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-602834 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.835502127s)
helpers_test.go:175: Cleaning up "force-systemd-env-602834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-602834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-602834: (2.523012729s)
--- PASS: TestForceSystemdEnv (41.36s)

TestKVMDriverInstallOrUpdate (1.19s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.19s)

TestErrorSpam/setup (23.15s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-087171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-087171 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-087171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-087171 --driver=docker  --container-runtime=crio: (23.145762308s)
--- PASS: TestErrorSpam/setup (23.15s)

TestErrorSpam/start (0.53s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

TestErrorSpam/status (0.82s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.43s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.57s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (1.32s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 stop: (1.159555188s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087171 --log_dir /tmp/nospam-087171 stop
--- PASS: TestErrorSpam/stop (1.32s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19443-25263/.minikube/files/etc/test/nested/copy/32105/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.01s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-906828 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-906828 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.007797942s)
--- PASS: TestFunctional/serial/StartWithProxy (39.01s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.8s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-906828 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-906828 --alsologtostderr -v=8: (40.800254344s)
functional_test.go:663: soft start took 40.800991938s for "functional-906828" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.80s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-906828 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

TestFunctional/serial/CacheCmd/cache/add_local (0.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-906828 /tmp/TestFunctionalserialCacheCmdcacheadd_local232423518/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cache add minikube-local-cache-test:functional-906828
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cache delete minikube-local-cache-test:functional-906828
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-906828
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (251.013234ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
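
This is the interesting sequence in the cache group: remove a cached image from the node, watch crictl inspecti fail (the exit status 1 above is the expected outcome), then restore the image from minikube's local cache. By hand, assuming a minikube binary on PATH and that pause:latest was previously cache add-ed, as in add_remote above:

  minikube -p functional-906828 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-906828 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
  minikube -p functional-906828 cache reload
  minikube -p functional-906828 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again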

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 kubectl -- --context functional-906828 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-906828 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (33.43s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-906828 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-906828 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.428016044s)
functional_test.go:761: restart took 33.428130006s for "functional-906828" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.43s)
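
--extra-config threads component flags through to the control plane and, as the restart above shows, the setting is persisted in the profile. A sketch of checking that the flag actually reached the apiserver, assuming a minikube binary on PATH (the grep target is the flag name, surfaced from the static pod spec):

  minikube start -p functional-906828 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl --context functional-906828 -n kube-system get pod \
    -l component=kube-apiserver -o yaml | grep enable-admission-plugins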

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-906828 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.23s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-906828 logs: (1.233090701s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 logs --file /tmp/TestFunctionalserialLogsFileCmd38789021/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-906828 logs --file /tmp/TestFunctionalserialLogsFileCmd38789021/001/logs.txt: (1.251982632s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (3.77s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-906828 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-906828
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-906828: exit status 115 (299.63413ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30391 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-906828 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.77s)

TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 config get cpus: exit status 14 (79.664487ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 config get cpus: exit status 14 (51.889886ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
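
Both exit-status-14 failures above are the expected behaviour: config get exits 14 when the key is unset. The round trip by hand, assuming a minikube binary on PATH:

  minikube -p functional-906828 config get cpus; echo "exit=$?"   # exit=14 while unset
  minikube -p functional-906828 config set cpus 2
  minikube -p functional-906828 config get cpus                   # prints 2
  minikube -p functional-906828 config unset cpus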

TestFunctional/parallel/DashboardCmd (7.64s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-906828 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-906828 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 70230: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.64s)

TestFunctional/parallel/DryRun (0.45s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-906828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-906828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (200.739149ms)
-- stdout --
	* [functional-906828] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0815 00:17:06.322854   69296 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:17:06.322952   69296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:17:06.322963   69296 out.go:304] Setting ErrFile to fd 2...
	I0815 00:17:06.322969   69296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:17:06.323175   69296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:17:06.323666   69296 out.go:298] Setting JSON to false
	I0815 00:17:06.324615   69296 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7163,"bootTime":1723673863,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:17:06.324674   69296 start.go:139] virtualization: kvm guest
	I0815 00:17:06.326717   69296 out.go:177] * [functional-906828] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:17:06.327863   69296 notify.go:220] Checking for updates...
	I0815 00:17:06.327896   69296 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:17:06.329106   69296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:17:06.330294   69296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:17:06.331489   69296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:17:06.332581   69296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:17:06.333646   69296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:17:06.335170   69296 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:17:06.335731   69296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:17:06.360579   69296 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:17:06.360721   69296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:17:06.430104   69296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 00:17:06.41858453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:17:06.430232   69296 docker.go:307] overlay module found
	I0815 00:17:06.432213   69296 out.go:177] * Using the docker driver based on existing profile
	I0815 00:17:06.433229   69296 start.go:297] selected driver: docker
	I0815 00:17:06.433247   69296 start.go:901] validating driver "docker" against &{Name:functional-906828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-906828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:17:06.433363   69296 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:17:06.435680   69296 out.go:177] 
	W0815 00:17:06.437277   69296 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 00:17:06.438723   69296 out.go:177] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-906828 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
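
--dry-run runs the full flag validation without touching the cluster, which is why the undersized --memory fails in ~200ms with exit status 23. By hand, assuming a minikube binary on PATH:

  minikube start -p functional-906828 --dry-run --memory 250MB \
    --driver=docker --container-runtime=crio
  echo $?   # 23: RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum is 1800MB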

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-906828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-906828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.473023ms)
-- stdout --
	* [functional-906828] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0815 00:17:06.128629   69199 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:17:06.128738   69199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:17:06.128748   69199 out.go:304] Setting ErrFile to fd 2...
	I0815 00:17:06.128753   69199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:17:06.129033   69199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:17:06.129538   69199 out.go:298] Setting JSON to false
	I0815 00:17:06.130507   69199 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7163,"bootTime":1723673863,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:17:06.130567   69199 start.go:139] virtualization: kvm guest
	I0815 00:17:06.132830   69199 out.go:177] * [functional-906828] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0815 00:17:06.134132   69199 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:17:06.134212   69199 notify.go:220] Checking for updates...
	I0815 00:17:06.136141   69199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:17:06.137278   69199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:17:06.138364   69199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:17:06.139440   69199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:17:06.140635   69199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:17:06.142188   69199 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:17:06.142868   69199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:17:06.168890   69199 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:17:06.169032   69199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:17:06.230130   69199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 00:17:06.218780453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:17:06.230235   69199 docker.go:307] overlay module found
	I0815 00:17:06.232944   69199 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0815 00:17:06.233989   69199 start.go:297] selected driver: docker
	I0815 00:17:06.234003   69199 start.go:901] validating driver "docker" against &{Name:functional-906828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-906828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:17:06.234112   69199 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:17:06.236136   69199 out.go:177] 
	W0815 00:17:06.237396   69199 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 00:17:06.238637   69199 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.27s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
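
status takes a Go template via -f and structured output via -o json; note that the test's template above labels the {{.Kubelet}} field "kublet", a typo in the label only, not in the field name. A sketch, assuming a minikube binary on PATH:

  minikube -p functional-906828 status
  minikube -p functional-906828 status \
    -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
  minikube -p functional-906828 status -o json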

TestFunctional/parallel/ServiceCmdConnect (7.9s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-906828 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-906828 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5dfz6" [9b6caf72-78d7-40eb-8754-409596b3dd8a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5dfz6" [9b6caf72-78d7-40eb-8754-409596b3dd8a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.084514845s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31443
functional_test.go:1675: http://192.168.49.2:31443: success! body:
Hostname: hello-node-connect-67bdd5bbb4-5dfz6
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31443
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.90s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (29.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b45cc7eb-434e-4f8f-9bd6-36744fbb5241] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004399878s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-906828 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-906828 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-906828 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-906828 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14f59d7e-9e62-491c-af2a-c4d56fdfe4d1] Pending
helpers_test.go:344: "sp-pod" [14f59d7e-9e62-491c-af2a-c4d56fdfe4d1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14f59d7e-9e62-491c-af2a-c4d56fdfe4d1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003991158s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-906828 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-906828 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-906828 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [041588c2-ad21-46b2-852f-2b90b038ea16] Pending
helpers_test.go:344: "sp-pod" [041588c2-ad21-46b2-852f-2b90b038ea16] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003528668s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-906828 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.53s)

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh -n functional-906828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cp functional-906828:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2179196560/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh -n functional-906828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh -n functional-906828 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)

TestFunctional/parallel/MySQL (18.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-906828 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-n8lm4" [c29ce561-65e8-401c-8ec6-56b13509614c] Pending
helpers_test.go:344: "mysql-6cdb49bbb-n8lm4" [c29ce561-65e8-401c-8ec6-56b13509614c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-n8lm4" [c29ce561-65e8-401c-8ec6-56b13509614c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.00385746s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-906828 exec mysql-6cdb49bbb-n8lm4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-906828 exec mysql-6cdb49bbb-n8lm4 -- mysql -ppassword -e "show databases;": exit status 1 (101.153689ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-906828 exec mysql-6cdb49bbb-n8lm4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-906828 exec mysql-6cdb49bbb-n8lm4 -- mysql -ppassword -e "show databases;": exit status 1 (95.911227ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-906828 exec mysql-6cdb49bbb-n8lm4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.78s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/32105/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /etc/test/nested/copy/32105/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.66s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/32105.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /etc/ssl/certs/32105.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/32105.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /usr/share/ca-certificates/32105.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/321052.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /etc/ssl/certs/321052.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/321052.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /usr/share/ca-certificates/321052.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-906828 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh "sudo systemctl is-active docker": exit status 1 (275.154238ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh "sudo systemctl is-active containerd": exit status 1 (258.680685ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-906828 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-906828 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-whpd7" [e0939efb-395f-403b-b52f-89e18fe6d941] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-whpd7" [e0939efb-395f-403b-b52f-89e18fe6d941] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.007366709s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdany-port232606106/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723681024526905726" to /tmp/TestFunctionalparallelMountCmdany-port232606106/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723681024526905726" to /tmp/TestFunctionalparallelMountCmdany-port232606106/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723681024526905726" to /tmp/TestFunctionalparallelMountCmdany-port232606106/001/test-1723681024526905726
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (289.742105ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 00:17 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 00:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 00:17 test-1723681024526905726
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh cat /mount-9p/test-1723681024526905726
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-906828 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [734cddb1-7876-429b-8413-7716243b7645] Pending
helpers_test.go:344: "busybox-mount" [734cddb1-7876-429b-8413-7716243b7645] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [734cddb1-7876-429b-8413-7716243b7645] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [734cddb1-7876-429b-8413-7716243b7645] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003543862s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-906828 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdany-port232606106/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.16s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "323.545999ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.540148ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "359.313752ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.658595ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-906828 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-906828
localhost/kicbase/echo-server:functional-906828
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-906828 image ls --format short --alsologtostderr:
I0815 00:17:24.998625   75400 out.go:291] Setting OutFile to fd 1 ...
I0815 00:17:24.998744   75400 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:24.998754   75400 out.go:304] Setting ErrFile to fd 2...
I0815 00:17:24.998761   75400 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:24.998971   75400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
I0815 00:17:24.999532   75400 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:24.999645   75400 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:25.000055   75400 cli_runner.go:164] Run: docker container inspect functional-906828 --format={{.State.Status}}
I0815 00:17:25.019964   75400 ssh_runner.go:195] Run: systemctl --version
I0815 00:17:25.020021   75400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-906828
I0815 00:17:25.036273   75400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/functional-906828/id_rsa Username:docker}
I0815 00:17:25.127065   75400 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-906828 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | 1ae23480369fa | 45.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-906828  | e190aeed98b89 | 3.33kB |
| localhost/my-image                      | functional-906828  | 06d53a153cc62 | 1.47MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| localhost/kicbase/echo-server           | functional-906828  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-906828 image ls --format table --alsologtostderr:
I0815 00:17:28.745331   76209 out.go:291] Setting OutFile to fd 1 ...
I0815 00:17:28.745573   76209 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:28.745581   76209 out.go:304] Setting ErrFile to fd 2...
I0815 00:17:28.745586   76209 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:28.745762   76209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
I0815 00:17:28.746251   76209 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:28.746342   76209 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:28.746680   76209 cli_runner.go:164] Run: docker container inspect functional-906828 --format={{.State.Status}}
I0815 00:17:28.771695   76209 ssh_runner.go:195] Run: systemctl --version
I0815 00:17:28.771738   76209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-906828
I0815 00:17:28.795454   76209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/functional-906828/id_rsa Username:docker}
I0815 00:17:28.959303   76209 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-906828 image ls --format json --alsologtostderr:
[{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068794"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-906828"],"size":"4943877"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e190aeed98b895566487d208d3ba81521ec4621ccd429cf3a45e644d241ddcde","repoDigests":["localhost/minikube-local-cache-test@sha256:055dca3fa3bb7536e8c136b53c18e60fc325558aaa49ec26e935d6b2d26c9b82"],"repoTags":["localhost/minikube-local-cache-test:functional-906828"],"size":"3330"},{"id":"06d53a153cc62c3b3d860bf67b4177e1237dcd0d5db1e57c3538593865b9ee92","repoDigests":["localhost/my-image@sha256:dd873b1c599fdd2514d3aa413bd38082b383770ab7d7ddfb17b43e19317786f3"],"repoTags":["localhost/my-image:functional-906828"],"size":"1468194"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"cd2c13868653c30d00c5f45b297b6c6abb8b3d02e269a26343d8332be4ae5fa1","repoDigests":["docker.io/library/59af1b3ca5be7c486b821474e679baff80a9e295f24e1dfae07e3baabaa01374-tmp@sha256:3a02aa2e8f343e73f6a9be3bcb62dec6941c94815f0e31b1e7a2ade18084ff73"],"repoTags":[],"size":"1465612"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-906828 image ls --format json --alsologtostderr:
I0815 00:17:28.433747   76137 out.go:291] Setting OutFile to fd 1 ...
I0815 00:17:28.434017   76137 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:28.434026   76137 out.go:304] Setting ErrFile to fd 2...
I0815 00:17:28.434030   76137 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:28.434222   76137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
I0815 00:17:28.434789   76137 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:28.434889   76137 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:28.435220   76137 cli_runner.go:164] Run: docker container inspect functional-906828 --format={{.State.Status}}
I0815 00:17:28.451306   76137 ssh_runner.go:195] Run: systemctl --version
I0815 00:17:28.451351   76137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-906828
I0815 00:17:28.474619   76137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/functional-906828/id_rsa Username:docker}
I0815 00:17:28.610173   76137 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-906828 image ls --format yaml --alsologtostderr:
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-906828
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e190aeed98b895566487d208d3ba81521ec4621ccd429cf3a45e644d241ddcde
repoDigests:
- localhost/minikube-local-cache-test@sha256:055dca3fa3bb7536e8c136b53c18e60fc325558aaa49ec26e935d6b2d26c9b82
repoTags:
- localhost/minikube-local-cache-test:functional-906828
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-906828 image ls --format yaml --alsologtostderr:
I0815 00:17:25.227361   75486 out.go:291] Setting OutFile to fd 1 ...
I0815 00:17:25.227777   75486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:25.227795   75486 out.go:304] Setting ErrFile to fd 2...
I0815 00:17:25.227803   75486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:25.228227   75486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
I0815 00:17:25.228967   75486 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:25.229061   75486 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:25.229428   75486 cli_runner.go:164] Run: docker container inspect functional-906828 --format={{.State.Status}}
I0815 00:17:25.245238   75486 ssh_runner.go:195] Run: systemctl --version
I0815 00:17:25.245277   75486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-906828
I0815 00:17:25.263172   75486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/functional-906828/id_rsa Username:docker}
I0815 00:17:25.458793   75486 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.40s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh pgrep buildkitd: exit status 1 (374.659741ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image build -t localhost/my-image:functional-906828 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-906828 image build -t localhost/my-image:functional-906828 testdata/build --alsologtostderr: (2.122657002s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-906828 image build -t localhost/my-image:functional-906828 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cd2c1386865
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-906828
--> 06d53a153cc
Successfully tagged localhost/my-image:functional-906828
06d53a153cc62c3b3d860bf67b4177e1237dcd0d5db1e57c3538593865b9ee92
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-906828 image build -t localhost/my-image:functional-906828 testdata/build --alsologtostderr:
I0815 00:17:26.012338   75682 out.go:291] Setting OutFile to fd 1 ...
I0815 00:17:26.012635   75682 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:26.012645   75682 out.go:304] Setting ErrFile to fd 2...
I0815 00:17:26.012649   75682 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:17:26.012890   75682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
I0815 00:17:26.013498   75682 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:26.015422   75682 config.go:182] Loaded profile config "functional-906828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:17:26.015838   75682 cli_runner.go:164] Run: docker container inspect functional-906828 --format={{.State.Status}}
I0815 00:17:26.033046   75682 ssh_runner.go:195] Run: systemctl --version
I0815 00:17:26.033099   75682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-906828
I0815 00:17:26.048863   75682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/functional-906828/id_rsa Username:docker}
I0815 00:17:26.149657   75682 build_images.go:161] Building image from path: /tmp/build.2845312631.tar
I0815 00:17:26.149723   75682 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 00:17:26.162886   75682 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2845312631.tar
I0815 00:17:26.165926   75682 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2845312631.tar: stat -c "%s %y" /var/lib/minikube/build/build.2845312631.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2845312631.tar': No such file or directory
I0815 00:17:26.165951   75682 ssh_runner.go:362] scp /tmp/build.2845312631.tar --> /var/lib/minikube/build/build.2845312631.tar (3072 bytes)
I0815 00:17:26.188174   75682 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2845312631
I0815 00:17:26.196752   75682 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2845312631 -xf /var/lib/minikube/build/build.2845312631.tar
I0815 00:17:26.204389   75682 crio.go:315] Building image: /var/lib/minikube/build/build.2845312631
I0815 00:17:26.204462   75682 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-906828 /var/lib/minikube/build/build.2845312631 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0815 00:17:28.060906   75682 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-906828 /var/lib/minikube/build/build.2845312631 --cgroup-manager=cgroupfs: (1.856411286s)
I0815 00:17:28.060972   75682 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2845312631
I0815 00:17:28.069637   75682 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2845312631.tar
I0815 00:17:28.077985   75682 build_images.go:217] Built localhost/my-image:functional-906828 from /tmp/build.2845312631.tar
I0815 00:17:28.078013   75682 build_images.go:133] succeeded building to: functional-906828
I0815 00:17:28.078020   75682 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.80s)
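
For reference, the stderr trace above shows the full sequence minikube drives over SSH for a crio-backed image build: stage the context tarball under /var/lib/minikube/build, extract it, run podman build with the cgroupfs cgroup manager, then remove the staging directory and tarball. A minimal Go sketch of that sequence follows, assuming a local shell with sudo and podman available; the paths and image tag are copied from the log, and running them outside the minikube node is the assumption.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and surfaces its combined output on failure,
// mirroring how ssh_runner logs each step in the trace above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	const tarPath = "/var/lib/minikube/build/build.2845312631.tar" // path from the log
	const buildDir = "/var/lib/minikube/build/build.2845312631"
	steps := [][]string{
		{"sudo", "mkdir", "-p", buildDir},
		{"sudo", "tar", "-C", buildDir, "-xf", tarPath},
		// --cgroup-manager=cgroupfs matches the flag crio.go:315 passes above.
		{"sudo", "podman", "build", "-t", "localhost/my-image:functional-906828", buildDir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", buildDir},
		{"sudo", "rm", "-f", tarPath},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println(err)
			return
		}
	}
}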

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-906828
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image load --daemon kicbase/echo-server:functional-906828 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-906828 image load --daemon kicbase/echo-server:functional-906828 --alsologtostderr: (1.204221042s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image load --daemon kicbase/echo-server:functional-906828 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-906828
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image load --daemon kicbase/echo-server:functional-906828 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-906828 image load --daemon kicbase/echo-server:functional-906828 --alsologtostderr: (2.00903355s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdspecific-port2242798884/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.397133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdspecific-port2242798884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh "sudo umount -f /mount-9p": exit status 1 (269.731245ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-906828 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdspecific-port2242798884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)
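
The pattern above is worth noting: the first findmnt probe fails with exit status 1 because the 9p mount is not up yet, the retry succeeds, and the final forced umount returns exit status 32 ("not mounted") once the mount daemon has already been stopped, which the cleanup tolerates. A small Go sketch of that poll-then-tolerant-cleanup flow, assuming findmnt and sudo on the host:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const mountPoint = "/mount-9p" // guest path from the log

	// Poll findmnt until the 9p mount appears; the first probe in the
	// trace above fails before the mount is ready.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		out, err := exec.Command("findmnt", "-T", mountPoint).Output()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Println("mount is ready")
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Cleanup: force the unmount and tolerate "not mounted" (exit 32),
	// exactly what the log shows once the mount daemon is already gone.
	if out, err := exec.Command("sudo", "umount", "-f", mountPoint).CombinedOutput(); err != nil {
		fmt.Printf("umount (ignored): %v\n%s", err, out)
	}
}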

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image save kicbase/echo-server:functional-906828 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image rm kicbase/echo-server:functional-906828 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895252188/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895252188/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895252188/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T" /mount1: exit status 1 (379.877071ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-906828 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895252188/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895252188/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-906828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup895252188/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 service list -o json
2024/08/15 00:17:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1494: Took "506.587203ms" to run "out/minikube-linux-amd64 -p functional-906828 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32088
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-906828
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 image save --daemon kicbase/echo-server:functional-906828 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-906828
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
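
The round-trip above is: delete the host-side tag, ask minikube to save the cached image back into the Docker daemon, then confirm it arrived. Note the inspect uses the localhost/ prefix the crio runtime applies, as the log shows. A sketch of the same check, assuming the out/minikube-linux-amd64 binary path from this workspace:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	img := "kicbase/echo-server:functional-906828"

	// Drop the host-side tag first so the save has to repopulate it.
	// The error is ignored on purpose: the image may already be gone.
	exec.Command("docker", "rmi", img).Run()

	// Same command the test runs at functional_test.go:424.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-906828",
		"image", "save", "--daemon", img).CombinedOutput()
	if err != nil {
		fmt.Printf("save failed: %v\n%s", err, out)
		return
	}

	// With the crio runtime the image lands under the localhost/ prefix,
	// which is why the log inspects localhost/kicbase/echo-server:... .
	if err := exec.Command("docker", "image", "inspect", "localhost/"+img).Run(); err != nil {
		fmt.Println("image did not round-trip:", err)
		return
	}
	fmt.Println("round-trip OK")
}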

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-906828 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32088
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-906828 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-906828 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-906828 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-906828 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 74210: os: process already finished
helpers_test.go:508: unable to kill pid 73882: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
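
The two "unable to kill pid ...: os: process already finished" lines are the expected outcome here: by the time cleanup runs, both tunnel processes have already exited, and the helper treats that as success. In Go this maps onto os.ErrProcessDone; a minimal sketch (the PIDs are the ones from this run and are meaningless elsewhere):

package main

import (
	"errors"
	"fmt"
	"os"
)

// stop kills pid and treats an already-exited process as success,
// matching the tolerant cleanup at helpers_test.go:508.
func stop(pid int) error {
	p, err := os.FindProcess(pid) // never fails on Unix
	if err != nil {
		return err
	}
	if err := p.Kill(); err != nil {
		if errors.Is(err, os.ErrProcessDone) {
			fmt.Printf("pid %d already finished, nothing to do\n", pid)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	for _, pid := range []int{74210, 73882} { // PIDs from this run only
		if err := stop(pid); err != nil {
			fmt.Println(err)
		}
	}
}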

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-906828 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-906828 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a187e43d-a318-470f-8b42-50f40ffd730b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a187e43d-a318-470f-8b42-50f40ffd730b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003853715s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.28s)
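
The 17s wait above is a label-selector poll: the pod starts Pending with unready containers, then flips to Running. A simplified sketch of such a wait using only kubectl and the 4m0s budget the test allows; it checks pod phase only, which is a simplification of the Ready-condition checks helpers_test.go performs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the phase of pods matching run=nginx-svc until Running.
	// Assumes a single matching pod, as in this run.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-906828",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}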

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-906828 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.146.2 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-906828 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-906828
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-906828
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-906828
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (98.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-808459 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 00:18:15.004330   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.011000   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.022321   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.043702   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.085125   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.166611   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.328159   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:15.649609   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:16.291594   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:17.573296   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:20.135728   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:25.257097   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:35.498508   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:55.980204   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-808459 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m37.901947219s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (98.56s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-808459 -- rollout status deployment/busybox: (2.011305121s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-hxm7s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-ksnlq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-v5nl4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-hxm7s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-ksnlq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-v5nl4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-hxm7s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-ksnlq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-v5nl4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.73s)
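
The deploy check above resolves three names (an external domain, the cluster-short name, and the cluster FQDN) from every busybox replica, which exercises CoreDNS from pods scheduled across the control-plane nodes. A compact sketch of the same matrix, assuming kubectl and the context name from the log; the pod names are the ones from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// In the test these come from `kubectl get pods -o jsonpath='{.items[*].metadata.name}'`.
	pods := []string{"busybox-7dff88458-hxm7s", "busybox-7dff88458-ksnlq", "busybox-7dff88458-v5nl4"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-808459",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s", pod, name, err, out)
				return
			}
		}
	}
	fmt.Println("DNS healthy in all replicas")
}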

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-hxm7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-hxm7s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-ksnlq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-ksnlq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-v5nl4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-808459 -- exec busybox-7dff88458-v5nl4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.94s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (33.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-808459 -v=7 --alsologtostderr
E0815 00:19:36.941924   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-808459 -v=7 --alsologtostderr: (32.305350496s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.10s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-808459 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp testdata/cp-test.txt ha-808459:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2470154890/001/cp-test_ha-808459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459:/home/docker/cp-test.txt ha-808459-m02:/home/docker/cp-test_ha-808459_ha-808459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test_ha-808459_ha-808459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459:/home/docker/cp-test.txt ha-808459-m03:/home/docker/cp-test_ha-808459_ha-808459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test_ha-808459_ha-808459-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459:/home/docker/cp-test.txt ha-808459-m04:/home/docker/cp-test_ha-808459_ha-808459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test_ha-808459_ha-808459-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp testdata/cp-test.txt ha-808459-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2470154890/001/cp-test_ha-808459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m02:/home/docker/cp-test.txt ha-808459:/home/docker/cp-test_ha-808459-m02_ha-808459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test_ha-808459-m02_ha-808459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m02:/home/docker/cp-test.txt ha-808459-m03:/home/docker/cp-test_ha-808459-m02_ha-808459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test_ha-808459-m02_ha-808459-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m02:/home/docker/cp-test.txt ha-808459-m04:/home/docker/cp-test_ha-808459-m02_ha-808459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test_ha-808459-m02_ha-808459-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp testdata/cp-test.txt ha-808459-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2470154890/001/cp-test_ha-808459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m03:/home/docker/cp-test.txt ha-808459:/home/docker/cp-test_ha-808459-m03_ha-808459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test_ha-808459-m03_ha-808459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m03:/home/docker/cp-test.txt ha-808459-m02:/home/docker/cp-test_ha-808459-m03_ha-808459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test_ha-808459-m03_ha-808459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m03:/home/docker/cp-test.txt ha-808459-m04:/home/docker/cp-test_ha-808459-m03_ha-808459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test_ha-808459-m03_ha-808459-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp testdata/cp-test.txt ha-808459-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2470154890/001/cp-test_ha-808459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m04:/home/docker/cp-test.txt ha-808459:/home/docker/cp-test_ha-808459-m04_ha-808459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459 "sudo cat /home/docker/cp-test_ha-808459-m04_ha-808459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m04:/home/docker/cp-test.txt ha-808459-m02:/home/docker/cp-test_ha-808459-m04_ha-808459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m02 "sudo cat /home/docker/cp-test_ha-808459-m04_ha-808459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 cp ha-808459-m04:/home/docker/cp-test.txt ha-808459-m03:/home/docker/cp-test_ha-808459-m04_ha-808459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 ssh -n ha-808459-m03 "sudo cat /home/docker/cp-test_ha-808459-m04_ha-808459-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.14s)
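
The CopyFile block above is a full matrix: seed each node from the host, copy node-to-node for every ordered pair, and read each copy back with ssh + sudo cat. Written out as the loop it implicitly is (a sketch, assuming the same binary path and profile name as the log):

package main

import (
	"fmt"
	"os/exec"
)

// mk wraps the minikube binary used throughout this report.
func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube %v: %w\n%s", args, err, out)
	}
	return nil
}

func main() {
	nodes := []string{"ha-808459", "ha-808459-m02", "ha-808459-m03", "ha-808459-m04"}
	for _, src := range nodes {
		// Seed the source node from the host, then fan out to every other node.
		if err := mk("-p", "ha-808459", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
			fmt.Println(err)
			return
		}
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			if err := mk("-p", "ha-808459", "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath); err != nil {
				fmt.Println(err)
				return
			}
			// Read the copy back over SSH, as helpers_test.go:534 does.
			if err := mk("-p", "ha-808459", "ssh", "-n", dst, "sudo cat "+dstPath); err != nil {
				fmt.Println(err)
				return
			}
		}
	}
	fmt.Println("all node-to-node copies verified")
}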

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-808459 node stop m02 -v=7 --alsologtostderr: (11.785794899s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr: exit status 7 (638.141075ms)

                                                
                                                
-- stdout --
	ha-808459
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-808459-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-808459-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-808459-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:20:33.099445   97803 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:20:33.099560   97803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:33.099567   97803 out.go:304] Setting ErrFile to fd 2...
	I0815 00:20:33.099571   97803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:33.099767   97803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:20:33.099932   97803 out.go:298] Setting JSON to false
	I0815 00:20:33.099954   97803 mustload.go:65] Loading cluster: ha-808459
	I0815 00:20:33.100078   97803 notify.go:220] Checking for updates...
	I0815 00:20:33.100317   97803 config.go:182] Loaded profile config "ha-808459": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:20:33.100335   97803 status.go:255] checking status of ha-808459 ...
	I0815 00:20:33.100749   97803 cli_runner.go:164] Run: docker container inspect ha-808459 --format={{.State.Status}}
	I0815 00:20:33.119018   97803 status.go:330] ha-808459 host status = "Running" (err=<nil>)
	I0815 00:20:33.119048   97803 host.go:66] Checking if "ha-808459" exists ...
	I0815 00:20:33.119282   97803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-808459
	I0815 00:20:33.136103   97803 host.go:66] Checking if "ha-808459" exists ...
	I0815 00:20:33.136372   97803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:20:33.136433   97803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-808459
	I0815 00:20:33.154370   97803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/ha-808459/id_rsa Username:docker}
	I0815 00:20:33.246644   97803 ssh_runner.go:195] Run: systemctl --version
	I0815 00:20:33.250397   97803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:20:33.260527   97803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:20:33.307864   97803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-15 00:20:33.298409385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:20:33.308441   97803 kubeconfig.go:125] found "ha-808459" server: "https://192.168.49.254:8443"
	I0815 00:20:33.308468   97803 api_server.go:166] Checking apiserver status ...
	I0815 00:20:33.308497   97803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:20:33.318738   97803 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1485/cgroup
	I0815 00:20:33.327242   97803 api_server.go:182] apiserver freezer: "10:freezer:/docker/9ca3b4b2a06823c9e96b17f22f766e9b9cc91a95d36fa7b06d82664f90bb9792/crio/crio-717fa2dc98040f00b9973e55608c67c70dd01ba983538a5e2cc49994ab486a2f"
	I0815 00:20:33.327297   97803 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ca3b4b2a06823c9e96b17f22f766e9b9cc91a95d36fa7b06d82664f90bb9792/crio/crio-717fa2dc98040f00b9973e55608c67c70dd01ba983538a5e2cc49994ab486a2f/freezer.state
	I0815 00:20:33.335319   97803 api_server.go:204] freezer state: "THAWED"
	I0815 00:20:33.335353   97803 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 00:20:33.340398   97803 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 00:20:33.340419   97803 status.go:422] ha-808459 apiserver status = Running (err=<nil>)
	I0815 00:20:33.340429   97803 status.go:257] ha-808459 status: &{Name:ha-808459 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:20:33.340443   97803 status.go:255] checking status of ha-808459-m02 ...
	I0815 00:20:33.340707   97803 cli_runner.go:164] Run: docker container inspect ha-808459-m02 --format={{.State.Status}}
	I0815 00:20:33.357545   97803 status.go:330] ha-808459-m02 host status = "Stopped" (err=<nil>)
	I0815 00:20:33.357570   97803 status.go:343] host is not running, skipping remaining checks
	I0815 00:20:33.357578   97803 status.go:257] ha-808459-m02 status: &{Name:ha-808459-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:20:33.357598   97803 status.go:255] checking status of ha-808459-m03 ...
	I0815 00:20:33.357863   97803 cli_runner.go:164] Run: docker container inspect ha-808459-m03 --format={{.State.Status}}
	I0815 00:20:33.374344   97803 status.go:330] ha-808459-m03 host status = "Running" (err=<nil>)
	I0815 00:20:33.374369   97803 host.go:66] Checking if "ha-808459-m03" exists ...
	I0815 00:20:33.374646   97803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-808459-m03
	I0815 00:20:33.391741   97803 host.go:66] Checking if "ha-808459-m03" exists ...
	I0815 00:20:33.392040   97803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:20:33.392087   97803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-808459-m03
	I0815 00:20:33.409216   97803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/ha-808459-m03/id_rsa Username:docker}
	I0815 00:20:33.502433   97803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:20:33.512454   97803 kubeconfig.go:125] found "ha-808459" server: "https://192.168.49.254:8443"
	I0815 00:20:33.512479   97803 api_server.go:166] Checking apiserver status ...
	I0815 00:20:33.512505   97803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:20:33.521715   97803 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	I0815 00:20:33.529736   97803 api_server.go:182] apiserver freezer: "10:freezer:/docker/931c993999c7823749734c09f358c0f70fd06514c0fd04ac1c8030a671b7742d/crio/crio-a4cef9cadbbe8b92719134f2a29fbf3b8b7c669330a74903bf596d44060cc06b"
	I0815 00:20:33.529805   97803 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/931c993999c7823749734c09f358c0f70fd06514c0fd04ac1c8030a671b7742d/crio/crio-a4cef9cadbbe8b92719134f2a29fbf3b8b7c669330a74903bf596d44060cc06b/freezer.state
	I0815 00:20:33.536977   97803 api_server.go:204] freezer state: "THAWED"
	I0815 00:20:33.537006   97803 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 00:20:33.540515   97803 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 00:20:33.540541   97803 status.go:422] ha-808459-m03 apiserver status = Running (err=<nil>)
	I0815 00:20:33.540556   97803 status.go:257] ha-808459-m03 status: &{Name:ha-808459-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:20:33.540576   97803 status.go:255] checking status of ha-808459-m04 ...
	I0815 00:20:33.540817   97803 cli_runner.go:164] Run: docker container inspect ha-808459-m04 --format={{.State.Status}}
	I0815 00:20:33.558110   97803 status.go:330] ha-808459-m04 host status = "Running" (err=<nil>)
	I0815 00:20:33.558144   97803 host.go:66] Checking if "ha-808459-m04" exists ...
	I0815 00:20:33.558379   97803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-808459-m04
	I0815 00:20:33.575504   97803 host.go:66] Checking if "ha-808459-m04" exists ...
	I0815 00:20:33.575764   97803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:20:33.575806   97803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-808459-m04
	I0815 00:20:33.592574   97803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/ha-808459-m04/id_rsa Username:docker}
	I0815 00:20:33.683047   97803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:20:33.692836   97803 status.go:257] ha-808459-m04 status: &{Name:ha-808459-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.42s)
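
The status probe in the stderr trace is worth unpacking: for each running control-plane node it pgreps the kube-apiserver process, resolves its freezer cgroup from /proc/<pid>/cgroup, confirms freezer.state is THAWED (i.e. the container is not paused), and only then hits /healthz. A sketch of the cgroup part, assuming a cgroup v1 freezer hierarchy as on this CI host:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, as status.go does with pgrep.
	pidOut, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(pidOut))

	// Resolve its freezer cgroup from /proc/<pid>/cgroup (cgroup v1 layout:
	// "10:freezer:/docker/<id>/crio/crio-<id>").
	cg, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		fmt.Println(err)
		return
	}
	var freezer string
	for _, line := range strings.Split(string(cg), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			freezer = parts[2]
		}
	}
	if freezer == "" {
		fmt.Println("no freezer cgroup found (cgroup v2 host?)")
		return
	}

	// THAWED means the container is not paused; the trace checks exactly
	// this before probing https://192.168.49.254:8443/healthz.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + freezer + "/freezer.state")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("freezer state:", strings.TrimSpace(string(state)))
}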

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 node start m02 -v=7 --alsologtostderr
E0815 00:20:58.864042   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-808459 node start m02 -v=7 --alsologtostderr: (30.768046811s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.66s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (6.391985159s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-808459 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-808459 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-808459 -v=7 --alsologtostderr: (36.427626981s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-808459 --wait=true -v=7 --alsologtostderr
E0815 00:22:04.349716   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.356136   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.367476   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.388833   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.430166   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.511571   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.673078   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:04.994711   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:05.636937   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:06.918994   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:09.481034   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:14.603021   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:24.844366   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:22:45.326260   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:23:15.004116   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:23:26.288058   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-808459 --wait=true -v=7 --alsologtostderr: (1m53.341688251s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-808459
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.86s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.18s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 node delete m03 -v=7 --alsologtostderr
E0815 00:23:42.706368   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-808459 node delete m03 -v=7 --alsologtostderr: (10.462485212s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.18s)
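
A note on the last step: the go-template passed to kubectl is how the test decides that every remaining node reports Ready. As a minimal sketch (not part of the test suite; the two-node JSON below is invented for illustration), the same template can be exercised directly with Go's text/template:

    package main

    import (
    	"encoding/json"
    	"os"
    	"text/template"
    )

    // Invented sample in the shape `kubectl get nodes -o json` returns.
    const nodesJSON = `{"items":[
      {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
      {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

    // The template from the ha_test.go:519 step above: for each node, walk
    // status.conditions and print the status of the "Ready" condition.
    const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    func main() {
    	var nodes map[string]any
    	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
    		panic(err)
    	}
    	t := template.Must(template.New("ready").Parse(readyTmpl))
    	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" twice
    		panic(err)
    	}
    }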

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.45s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-808459 stop -v=7 --alsologtostderr: (35.354029496s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr: exit status 7 (95.153101ms)

                                                
                                                
-- stdout --
	ha-808459
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-808459-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-808459-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:24:29.089083  114768 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:24:29.089196  114768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:24:29.089206  114768 out.go:304] Setting ErrFile to fd 2...
	I0815 00:24:29.089213  114768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:24:29.089421  114768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:24:29.089593  114768 out.go:298] Setting JSON to false
	I0815 00:24:29.089618  114768 mustload.go:65] Loading cluster: ha-808459
	I0815 00:24:29.089715  114768 notify.go:220] Checking for updates...
	I0815 00:24:29.090080  114768 config.go:182] Loaded profile config "ha-808459": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:24:29.090103  114768 status.go:255] checking status of ha-808459 ...
	I0815 00:24:29.090489  114768 cli_runner.go:164] Run: docker container inspect ha-808459 --format={{.State.Status}}
	I0815 00:24:29.109526  114768 status.go:330] ha-808459 host status = "Stopped" (err=<nil>)
	I0815 00:24:29.109549  114768 status.go:343] host is not running, skipping remaining checks
	I0815 00:24:29.109556  114768 status.go:257] ha-808459 status: &{Name:ha-808459 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:24:29.109595  114768 status.go:255] checking status of ha-808459-m02 ...
	I0815 00:24:29.109921  114768 cli_runner.go:164] Run: docker container inspect ha-808459-m02 --format={{.State.Status}}
	I0815 00:24:29.127229  114768 status.go:330] ha-808459-m02 host status = "Stopped" (err=<nil>)
	I0815 00:24:29.127253  114768 status.go:343] host is not running, skipping remaining checks
	I0815 00:24:29.127259  114768 status.go:257] ha-808459-m02 status: &{Name:ha-808459-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:24:29.127276  114768 status.go:255] checking status of ha-808459-m04 ...
	I0815 00:24:29.127501  114768 cli_runner.go:164] Run: docker container inspect ha-808459-m04 --format={{.State.Status}}
	I0815 00:24:29.143282  114768 status.go:330] ha-808459-m04 host status = "Stopped" (err=<nil>)
	I0815 00:24:29.143299  114768 status.go:343] host is not running, skipping remaining checks
	I0815 00:24:29.143305  114768 status.go:257] ha-808459-m04 status: &{Name:ha-808459-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.45s)
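
The `&{Name:... Host:Stopped ...}` lines in the stderr above are a Go struct printed with %+v. A hypothetical mirror of that shape (field names are read off the log line, not taken from minikube's source) reproduces the format:

    package main

    import "fmt"

    // Assumed stand-in for the status value printed at status.go:257.
    type nodeStatus struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    	TimeToStop string
    	DockerEnv  string
    	PodManEnv  string
    }

    func main() {
    	s := nodeStatus{Name: "ha-808459", Host: "Stopped", Kubelet: "Stopped",
    		APIServer: "Stopped", Kubeconfig: "Stopped"}
    	fmt.Printf("%+v\n", &s) // &{Name:ha-808459 Host:Stopped ... PodManEnv:}
    }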

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (103.77s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-808459 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 00:24:48.211151   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-808459 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.040556278s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (39.24s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-808459 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-808459 --control-plane -v=7 --alsologtostderr: (38.450017969s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-808459 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.24s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                    
TestJSONOutput/start/Command (38.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-020035 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0815 00:27:04.352201   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:27:32.053091   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-020035 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (38.816191871s)
--- PASS: TestJSONOutput/start/Command (38.82s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-020035 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-020035 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.69s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-020035 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-020035 --output=json --user=testUser: (5.68941006s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-399307 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-399307 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.595145ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7df7dec0-57d6-4912-ad6e-7053dd18f27c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-399307] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"39e21efd-0d26-4b27-ae1b-f8f8c5f230bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"160c37a5-0cd9-4f03-a358-e9e59d67c3f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c6ce6342-986e-4bee-834e-06e0d318994b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig"}}
	{"specversion":"1.0","id":"6d41ab8e-4042-46c8-b9db-c1e62469215b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube"}}
	{"specversion":"1.0","id":"a62a7700-567e-46ee-83b9-0763ad4ff9a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"068f3424-ead8-4f99-bc09-1b6cbb2e61b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7b7ca78e-7c5a-4549-b40d-2433b94ca7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-399307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-399307
--- PASS: TestErrorJSONOutput (0.18s)
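
Each line that minikube emits under --output=json is a CloudEvents-style envelope, as the stdout above shows. A minimal decoding sketch; the struct is assumed from the fields visible in this log, not from a published minikube schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type minikubeEvent struct {
    	SpecVersion string            `json:"specversion"`
    	ID          string            `json:"id"`
    	Source      string            `json:"source"`
    	Type        string            `json:"type"`
    	Data        map[string]string `json:"data"`
    }

    func main() {
    	// The error event from the stdout above, verbatim.
    	line := `{"specversion":"1.0","id":"7b7ca78e-7c5a-4549-b40d-2433b94ca7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
    	var ev minikubeEvent
    	if err := json.Unmarshal([]byte(line), &ev); err != nil {
    		panic(err)
    	}
    	fmt.Println(ev.Type, "->", ev.Data["message"])
    }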

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-302582 --network=
E0815 00:28:15.004448   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-302582 --network=: (24.683752224s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-302582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-302582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-302582: (2.014534692s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.71s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.89s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-141015 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-141015 --network=bridge: (21.053674047s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-141015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-141015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-141015: (1.815015188s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.89s)

                                                
                                    
TestKicExistingNetwork (25.55s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-096421 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-096421 --network=existing-network: (23.598420259s)
helpers_test.go:175: Cleaning up "existing-network-096421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-096421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-096421: (1.817986756s)
--- PASS: TestKicExistingNetwork (25.55s)

                                                
                                    
TestKicCustomSubnet (22.74s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-596065 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-596065 --subnet=192.168.60.0/24: (20.810977062s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-596065 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-596065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-596065
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-596065: (1.915844845s)
--- PASS: TestKicCustomSubnet (22.74s)
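
The --format expression in the check above indexes into the network's IPAM configuration. A small sketch running the same inspection from Go; the network name assumes the profile created above still exists:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same docker CLI invocation as the kic_custom_network_test.go:161 step.
    	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-596065",
    		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("subnet:", strings.TrimSpace(string(out))) // 192.168.60.0/24 in this run
    }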

                                                
                                    
TestKicStaticIP (22.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-588905 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-588905 --static-ip=192.168.200.200: (20.647090005s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-588905 ip
helpers_test.go:175: Cleaning up "static-ip-588905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-588905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-588905: (1.926920823s)
--- PASS: TestKicStaticIP (22.69s)

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (52.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-365356 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-365356 --driver=docker  --container-runtime=crio: (22.907323697s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-367759 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-367759 --driver=docker  --container-runtime=crio: (25.024378184s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-365356
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-367759
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-367759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-367759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-367759: (1.813704668s)
helpers_test.go:175: Cleaning up "first-365356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-365356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-365356: (2.14295946s)
--- PASS: TestMinikubeProfile (52.92s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-121544 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-121544 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.405100353s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.41s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-121544 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.94s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-132699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-132699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.938423827s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.94s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132699 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.56s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-121544 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-121544 --alsologtostderr -v=5: (1.558725514s)
--- PASS: TestMountStart/serial/DeleteFirst (1.56s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132699 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-132699
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-132699: (1.164045449s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-132699
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-132699: (6.04753209s)
--- PASS: TestMountStart/serial/RestartStopped (7.05s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132699 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (63.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193954 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 00:32:04.349918   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193954 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.476505386s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (2.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-193954 -- rollout status deployment/busybox: (1.696716693s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-65kkb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-m96fr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-65kkb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-m96fr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-65kkb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-m96fr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.97s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.65s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-65kkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-65kkb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-m96fr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193954 -- exec busybox-7dff88458-m96fr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.65s)

                                                
                                    
TestMultiNode/serial/AddNode (25.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-193954 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-193954 -v 3 --alsologtostderr: (25.153862568s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.73s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-193954 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp testdata/cp-test.txt multinode-193954:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3765126441/001/cp-test_multinode-193954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954:/home/docker/cp-test.txt multinode-193954-m02:/home/docker/cp-test_multinode-193954_multinode-193954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m02 "sudo cat /home/docker/cp-test_multinode-193954_multinode-193954-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954:/home/docker/cp-test.txt multinode-193954-m03:/home/docker/cp-test_multinode-193954_multinode-193954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m03 "sudo cat /home/docker/cp-test_multinode-193954_multinode-193954-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp testdata/cp-test.txt multinode-193954-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3765126441/001/cp-test_multinode-193954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954-m02:/home/docker/cp-test.txt multinode-193954:/home/docker/cp-test_multinode-193954-m02_multinode-193954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954 "sudo cat /home/docker/cp-test_multinode-193954-m02_multinode-193954.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954-m02:/home/docker/cp-test.txt multinode-193954-m03:/home/docker/cp-test_multinode-193954-m02_multinode-193954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m03 "sudo cat /home/docker/cp-test_multinode-193954-m02_multinode-193954-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp testdata/cp-test.txt multinode-193954-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3765126441/001/cp-test_multinode-193954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954-m03:/home/docker/cp-test.txt multinode-193954:/home/docker/cp-test_multinode-193954-m03_multinode-193954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954 "sudo cat /home/docker/cp-test_multinode-193954-m03_multinode-193954.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 cp multinode-193954-m03:/home/docker/cp-test.txt multinode-193954-m02:/home/docker/cp-test_multinode-193954-m03_multinode-193954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 ssh -n multinode-193954-m02 "sudo cat /home/docker/cp-test_multinode-193954-m03_multinode-193954-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.60s)

                                                
                                    
TestMultiNode/serial/StopNode (2.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-193954 node stop m03: (1.158658302s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193954 status: exit status 7 (446.382462ms)

                                                
                                                
-- stdout --
	multinode-193954
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-193954-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-193954-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr: exit status 7 (438.579596ms)

                                                
                                                
-- stdout --
	multinode-193954
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-193954-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-193954-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:32:53.587166  179960 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:32:53.587415  179960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:32:53.587424  179960 out.go:304] Setting ErrFile to fd 2...
	I0815 00:32:53.587428  179960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:32:53.587617  179960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:32:53.587762  179960 out.go:298] Setting JSON to false
	I0815 00:32:53.587782  179960 mustload.go:65] Loading cluster: multinode-193954
	I0815 00:32:53.587883  179960 notify.go:220] Checking for updates...
	I0815 00:32:53.588152  179960 config.go:182] Loaded profile config "multinode-193954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:32:53.588168  179960 status.go:255] checking status of multinode-193954 ...
	I0815 00:32:53.588517  179960 cli_runner.go:164] Run: docker container inspect multinode-193954 --format={{.State.Status}}
	I0815 00:32:53.605600  179960 status.go:330] multinode-193954 host status = "Running" (err=<nil>)
	I0815 00:32:53.605639  179960 host.go:66] Checking if "multinode-193954" exists ...
	I0815 00:32:53.605937  179960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193954
	I0815 00:32:53.622496  179960 host.go:66] Checking if "multinode-193954" exists ...
	I0815 00:32:53.622702  179960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:32:53.622744  179960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193954
	I0815 00:32:53.638291  179960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/multinode-193954/id_rsa Username:docker}
	I0815 00:32:53.730299  179960 ssh_runner.go:195] Run: systemctl --version
	I0815 00:32:53.734105  179960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:32:53.743562  179960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:32:53.790056  179960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-15 00:32:53.781392136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:32:53.790686  179960 kubeconfig.go:125] found "multinode-193954" server: "https://192.168.67.2:8443"
	I0815 00:32:53.790710  179960 api_server.go:166] Checking apiserver status ...
	I0815 00:32:53.790744  179960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:32:53.800711  179960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1495/cgroup
	I0815 00:32:53.808656  179960 api_server.go:182] apiserver freezer: "10:freezer:/docker/1a730a7a337903b9e094bbcec0396b0d27b599bedf5385665b6039077dd2eb5c/crio/crio-732ea2e70326a082fd0f384345bc57fe16df3e8c3014950ab9f73f6f032a8b00"
	I0815 00:32:53.808698  179960 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1a730a7a337903b9e094bbcec0396b0d27b599bedf5385665b6039077dd2eb5c/crio/crio-732ea2e70326a082fd0f384345bc57fe16df3e8c3014950ab9f73f6f032a8b00/freezer.state
	I0815 00:32:53.815736  179960 api_server.go:204] freezer state: "THAWED"
	I0815 00:32:53.815761  179960 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0815 00:32:53.819234  179960 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0815 00:32:53.819253  179960 status.go:422] multinode-193954 apiserver status = Running (err=<nil>)
	I0815 00:32:53.819265  179960 status.go:257] multinode-193954 status: &{Name:multinode-193954 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:32:53.819287  179960 status.go:255] checking status of multinode-193954-m02 ...
	I0815 00:32:53.819546  179960 cli_runner.go:164] Run: docker container inspect multinode-193954-m02 --format={{.State.Status}}
	I0815 00:32:53.835628  179960 status.go:330] multinode-193954-m02 host status = "Running" (err=<nil>)
	I0815 00:32:53.835644  179960 host.go:66] Checking if "multinode-193954-m02" exists ...
	I0815 00:32:53.835921  179960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193954-m02
	I0815 00:32:53.851476  179960 host.go:66] Checking if "multinode-193954-m02" exists ...
	I0815 00:32:53.851691  179960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:32:53.851728  179960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193954-m02
	I0815 00:32:53.868040  179960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19443-25263/.minikube/machines/multinode-193954-m02/id_rsa Username:docker}
	I0815 00:32:53.958435  179960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:32:53.968398  179960 status.go:257] multinode-193954-m02 status: &{Name:multinode-193954-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:32:53.968447  179960 status.go:255] checking status of multinode-193954-m03 ...
	I0815 00:32:53.968705  179960 cli_runner.go:164] Run: docker container inspect multinode-193954-m03 --format={{.State.Status}}
	I0815 00:32:53.985337  179960 status.go:330] multinode-193954-m03 host status = "Stopped" (err=<nil>)
	I0815 00:32:53.985356  179960 status.go:343] host is not running, skipping remaining checks
	I0815 00:32:53.985362  179960 status.go:257] multinode-193954-m03 status: &{Name:multinode-193954-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
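The StopNode log above also shows how "minikube status" decides the apiserver is Running: it locates the kube-apiserver process with pgrep, reads that process's freezer cgroup to confirm the container is "THAWED", and only then probes https://<control-plane-ip>:8443/healthz for a 200. Below is a minimal Go sketch of just the final healthz probe; it is an illustration, not minikube's actual code, and it skips TLS verification only because the sketch does not load minikube's CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: minikube's CA is not in the local trust store, so
		// verification is skipped for this local smoke test only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Same endpoint the log above checks after the freezer state read back "THAWED".
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}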

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-193954 node start m03 -v=7 --alsologtostderr: (8.448180859s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.08s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (101.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-193954
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-193954
E0815 00:33:15.004838   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-193954: (24.569828307s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193954 --wait=true -v=8 --alsologtostderr
E0815 00:34:38.068321   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193954 --wait=true -v=8 --alsologtostderr: (1m17.025622188s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-193954
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.68s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-193954 node delete m03: (4.622251636s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.15s)
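The last command in DeleteNode verifies node readiness with a kubectl go-template that walks .items -> .status.conditions and prints each "Ready" condition's status. The sketch below evaluates the same template string with Go's text/template over hand-built, JSON-shaped data; the two nodes are illustrative, not taken from the run.

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template the test passes to kubectl; over JSON-shaped data
	// (maps), .items and .status.conditions resolve as map keys.
	const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Illustrative two-node list: one Ready, one not.
	data := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "False"},
			}}},
		},
	}

	tmpl := template.Must(template.New("ready").Parse(ready))
	_ = tmpl.Execute(os.Stdout, data) // prints " True" then " False", one per line
}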

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-193954 stop: (23.445933795s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193954 status: exit status 7 (78.600113ms)

                                                
                                                
-- stdout --
	multinode-193954
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-193954-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr: exit status 7 (74.723186ms)

                                                
                                                
-- stdout --
	multinode-193954
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-193954-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:35:13.465739  189713 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:35:13.465995  189713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:35:13.466004  189713 out.go:304] Setting ErrFile to fd 2...
	I0815 00:35:13.466008  189713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:35:13.466174  189713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:35:13.466321  189713 out.go:298] Setting JSON to false
	I0815 00:35:13.466341  189713 mustload.go:65] Loading cluster: multinode-193954
	I0815 00:35:13.466429  189713 notify.go:220] Checking for updates...
	I0815 00:35:13.466724  189713 config.go:182] Loaded profile config "multinode-193954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:35:13.466744  189713 status.go:255] checking status of multinode-193954 ...
	I0815 00:35:13.467273  189713 cli_runner.go:164] Run: docker container inspect multinode-193954 --format={{.State.Status}}
	I0815 00:35:13.485097  189713 status.go:330] multinode-193954 host status = "Stopped" (err=<nil>)
	I0815 00:35:13.485131  189713 status.go:343] host is not running, skipping remaining checks
	I0815 00:35:13.485139  189713 status.go:257] multinode-193954 status: &{Name:multinode-193954 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:35:13.485161  189713 status.go:255] checking status of multinode-193954-m02 ...
	I0815 00:35:13.485373  189713 cli_runner.go:164] Run: docker container inspect multinode-193954-m02 --format={{.State.Status}}
	I0815 00:35:13.500182  189713 status.go:330] multinode-193954-m02 host status = "Stopped" (err=<nil>)
	I0815 00:35:13.500227  189713 status.go:343] host is not running, skipping remaining checks
	I0815 00:35:13.500239  189713 status.go:257] multinode-193954-m02 status: &{Name:multinode-193954-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193954 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193954 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (49.612856873s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193954 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.15s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-193954
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193954-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-193954-m02 --driver=docker  --container-runtime=crio: exit status 14 (59.63325ms)

                                                
                                                
-- stdout --
	* [multinode-193954-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-193954-m02' is duplicated with machine name 'multinode-193954-m02' in profile 'multinode-193954'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193954-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193954-m03 --driver=docker  --container-runtime=crio: (19.701523556s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-193954
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-193954: exit status 80 (257.552285ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-193954 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-193954-m03 already exists in multinode-193954-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-193954-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-193954-m03: (1.804950736s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.86s)

                                                
                                    
TestPreload (99.04s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-104314 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0815 00:37:04.352307   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-104314 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m14.103243982s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-104314 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-104314
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-104314: (5.6956361s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-104314 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-104314 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.220661355s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-104314 image list
helpers_test.go:175: Cleaning up "test-preload-104314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-104314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-104314: (1.879409125s)
--- PASS: TestPreload (99.04s)

                                                
                                    
TestScheduledStopUnix (98.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-592139 --memory=2048 --driver=docker  --container-runtime=crio
E0815 00:38:15.004840   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:38:27.415354   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-592139 --memory=2048 --driver=docker  --container-runtime=crio: (22.655128718s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-592139 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-592139 -n scheduled-stop-592139
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-592139 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-592139 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-592139 -n scheduled-stop-592139
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-592139
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-592139 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-592139
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-592139: exit status 7 (55.887807ms)

                                                
                                                
-- stdout --
	scheduled-stop-592139
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-592139 -n scheduled-stop-592139
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-592139 -n scheduled-stop-592139: exit status 7 (57.740125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-592139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-592139
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-592139: (4.837482372s)
--- PASS: TestScheduledStopUnix (98.69s)
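TestScheduledStopUnix exercises the whole scheduled-stop surface: arm a stop with --schedule, read back TimeToStop, cancel with --cancel-scheduled, re-arm with a 15s fuse, and finally observe a Stopped host (the exit status 7 from "minikube status" is expected for a stopped profile, hence "may be ok"). A compressed sketch of the same flow driven through os/exec, the way these integration tests shell out to the binary; the profile name "demo" and the bare "minikube" path are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary and reports combined output,
// mirroring how the integration tests invoke it.
func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s(err: %v)\n", args, out, err)
}

func main() {
	run("stop", "-p", "demo", "--schedule", "5m") // arm a stop 5 minutes out
	run("status", "-p", "demo", "--format", "{{.TimeToStop}}")
	run("stop", "-p", "demo", "--cancel-scheduled") // disarm before it fires
	run("stop", "-p", "demo", "--schedule", "15s")  // re-arm with a short fuse
	// After ~15s the host stops; status then exits 7 with host: Stopped.
	run("status", "-p", "demo", "--format", "{{.Host}}")
}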

                                                
                                    
TestInsufficientStorage (9.43s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-877492 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-877492 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.147839531s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4062d267-0a0d-491b-95a6-4449f98c4a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-877492] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8fa15be-f9e5-4308-bf90-b1057aafe064","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"149c1d15-167e-4ed9-a29e-c6a6ca5861d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec9da14b-9cab-49b5-ae44-bde73897f004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig"}}
	{"specversion":"1.0","id":"959ae54d-1024-4841-aa87-cb56df82b4f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube"}}
	{"specversion":"1.0","id":"25510607-2fc2-43e3-898a-a0248ac8076f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"496bb569-1c5f-4a73-9111-41dbdd598d91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d763a4ec-7ad1-4e9c-9e3f-f22deda2cd10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"928e0af1-e792-480f-8adb-4614d68a5572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1b1cc6d6-8264-4ed0-bf63-861af8f49d60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef9fc4e5-68e6-454b-9f89-8a36c13099b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4a3245ab-3fd9-4d5b-8c13-e03e5431b6a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-877492\" primary control-plane node in \"insufficient-storage-877492\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f28b416-520e-42c7-9da1-2de8ebde4747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723650208-19443 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a9a2460-b461-44e0-852c-86231cb07160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfdc0329-00b1-41b6-9de3-396530ef950c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-877492 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-877492 --output=json --layout=cluster: exit status 7 (249.066258ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-877492","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-877492","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 00:39:54.448011  212098 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-877492" does not appear in /home/jenkins/minikube-integration/19443-25263/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-877492 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-877492 --output=json --layout=cluster: exit status 7 (245.091433ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-877492","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-877492","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 00:39:54.693963  212195 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-877492" does not appear in /home/jenkins/minikube-integration/19443-25263/kubeconfig
	E0815 00:39:54.703035  212195 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/insufficient-storage-877492/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-877492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-877492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-877492: (1.790644178s)
--- PASS: TestInsufficientStorage (9.43s)
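With --output=json, each line minikube prints is a CloudEvents-style JSON object; the type field separates setup steps (io.k8s.sigs.minikube.step) and info messages from terminal errors such as the RSRC_DOCKER_STORAGE event above, whose data carries the advice text and exit code 26. A small decoding sketch follows; the struct models only the fields visible in this log, and the sample line is an abbreviated copy of the event above.

package main

import (
	"encoding/json"
	"fmt"
)

// event models just the fields of minikube's --output=json lines that this
// log exercises; real events carry more (specversion, id, source, ...).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the log above.
	line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!","issues":"https://github.com/kubernetes/minikube/issues/9024"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("fatal: %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}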

                                                
                                    
TestRunningBinaryUpgrade (54.5s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3300337733 start -p running-upgrade-402708 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3300337733 start -p running-upgrade-402708 --memory=2200 --vm-driver=docker  --container-runtime=crio: (20.661822888s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-402708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-402708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.998800703s)
helpers_test.go:175: Cleaning up "running-upgrade-402708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-402708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-402708: (2.41262111s)
--- PASS: TestRunningBinaryUpgrade (54.50s)

                                                
                                    
TestKubernetesUpgrade (356.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.901474529s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-425687
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-425687: (3.724178701s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-425687 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-425687 status --format={{.Host}}: exit status 7 (79.674988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.139581713s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-425687 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (77.574835ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-425687] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-425687
	    minikube start -p kubernetes-upgrade-425687 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4256872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-425687 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-425687 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.247918503s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-425687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-425687
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-425687: (2.08088181s)
--- PASS: TestKubernetesUpgrade (356.32s)
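The downgrade leg of TestKubernetesUpgrade fails by design: moving a v1.31.0 cluster back to v1.20.0 in place is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106) before anything is modified. Below is a sketch of that version guard using golang.org/x/mod/semver for the comparison; minikube implements the real check in its own start logic, so this only illustrates the rule.

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.31.0", "v1.20.0"
	// Refuse any in-place move to an older Kubernetes version.
	if semver.Compare(requested, existing) < 0 {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", existing, requested)
		os.Exit(106) // matches the exit status seen above
	}
	fmt.Println("upgrade (or same version) is allowed")
}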

                                                
                                    
TestMissingContainerUpgrade (129.78s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.776672342 start -p missing-upgrade-541500 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.776672342 start -p missing-upgrade-541500 --memory=2200 --driver=docker  --container-runtime=crio: (1m1.319936144s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-541500
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-541500: (10.816137051s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-541500
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-541500 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-541500 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.105483463s)
helpers_test.go:175: Cleaning up "missing-upgrade-541500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-541500
E0815 00:42:04.351197   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-541500: (1.985458431s)
--- PASS: TestMissingContainerUpgrade (129.78s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585855 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-585855 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (80.865647ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-585855] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585855 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585855 --driver=docker  --container-runtime=crio: (37.672316513s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-585855 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.21s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585855 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585855 --no-kubernetes --driver=docker  --container-runtime=crio: (4.170849421s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-585855 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-585855 status -o json: exit status 2 (278.422262ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-585855","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-585855
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-585855: (3.03006447s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.48s)

                                                
                                    
TestNoKubernetes/serial/Start (4.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585855 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585855 --no-kubernetes --driver=docker  --container-runtime=crio: (4.627470023s)
--- PASS: TestNoKubernetes/serial/Start (4.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-585855 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-585855 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.351005ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
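The VerifyK8sNotRunning check reduces to one systemd query over ssh: "systemctl is-active --quiet service kubelet" exits 0 when the unit is active and 3 when it is inactive, which is the status the ssh session reports above. A local sketch of the same probe via os/exec (it assumes a systemd host; the unit name is taken from the test):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone carries the answer:
	// 0 = active, 3 = inactive (the status seen in the test's ssh session).
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet is not active, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}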

                                                
                                    
TestNoKubernetes/serial/ProfileList (7.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.410239885s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-585855
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-585855: (1.904349547s)
--- PASS: TestNoKubernetes/serial/Stop (1.90s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585855 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585855 --driver=docker  --container-runtime=crio: (6.596850006s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.60s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-585855 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-585855 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.388755ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (57.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.569828709 start -p stopped-upgrade-812137 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.569828709 start -p stopped-upgrade-812137 --memory=2200 --vm-driver=docker  --container-runtime=crio: (25.488145068s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.569828709 -p stopped-upgrade-812137 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.569828709 -p stopped-upgrade-812137 stop: (5.651355906s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-812137 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-812137 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.23456073s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.37s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-812137
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                    
TestNetworkPlugins/group/false (3.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-921766 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-921766 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (141.501416ms)

                                                
                                                
-- stdout --
	* [false-921766] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:42:38.149085  258451 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:42:38.149176  258451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:38.149183  258451 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:38.149187  258451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:38.149336  258451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-25263/.minikube/bin
	I0815 00:42:38.149903  258451 out.go:298] Setting JSON to false
	I0815 00:42:38.151001  258451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8695,"bootTime":1723673863,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:42:38.151060  258451 start.go:139] virtualization: kvm guest
	I0815 00:42:38.153372  258451 out.go:177] * [false-921766] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:42:38.154741  258451 notify.go:220] Checking for updates...
	I0815 00:42:38.154771  258451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:42:38.156197  258451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:42:38.157592  258451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-25263/kubeconfig
	I0815 00:42:38.159060  258451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-25263/.minikube
	I0815 00:42:38.160305  258451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:42:38.161728  258451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:42:38.163635  258451 config.go:182] Loaded profile config "cert-expiration-808958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:42:38.163775  258451 config.go:182] Loaded profile config "kubernetes-upgrade-425687": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:42:38.163904  258451 config.go:182] Loaded profile config "running-upgrade-402708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0815 00:42:38.164036  258451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:42:38.189644  258451 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:42:38.189765  258451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:42:38.240511  258451 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:76 SystemTime:2024-08-15 00:42:38.231451513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 00:42:38.240615  258451 docker.go:307] overlay module found
	I0815 00:42:38.242747  258451 out.go:177] * Using the docker driver based on user configuration
	I0815 00:42:38.244086  258451 start.go:297] selected driver: docker
	I0815 00:42:38.244104  258451 start.go:901] validating driver "docker" against <nil>
	I0815 00:42:38.244118  258451 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:42:38.246649  258451 out.go:177] 
	W0815 00:42:38.248228  258451 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0815 00:42:38.249676  258451 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-921766 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-921766" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:41:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-808958
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:42:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-425687
contexts:
- context:
    cluster: cert-expiration-808958
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:41:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-808958
  name: cert-expiration-808958
- context:
    cluster: kubernetes-upgrade-425687
    user: kubernetes-upgrade-425687
  name: kubernetes-upgrade-425687
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-808958
  user:
    client-certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/cert-expiration-808958/client.crt
    client-key: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/cert-expiration-808958/client.key
- name: kubernetes-upgrade-425687
  user:
    client-certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/kubernetes-upgrade-425687/client.crt
    client-key: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/kubernetes-upgrade-425687/client.key
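
The config dump above shows the root cause of every failed probe in this debug log: only two leftover profiles (cert-expiration-808958 and kubernetes-upgrade-425687) are registered, current-context is empty, and no false-921766 context was ever created because the start command was rejected. A minimal sketch of inspecting that state by hand with standard kubectl subcommands:

    kubectl config get-contexts               # lists only the two leftover contexts
    kubectl config current-context            # fails: current-context is not set
    kubectl --context false-921766 get pods   # fails: context was not found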

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-921766

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921766"

                                                
                                                
----------------------- debugLogs end: false-921766 [took: 2.981856949s] --------------------------------
helpers_test.go:175: Cleaning up "false-921766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-921766
--- PASS: TestNetworkPlugins/group/false (3.30s)
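
This pass is the expected-failure case: with the crio runtime, minikube refuses to start when CNI is disabled (the MK_USAGE exit above), so no profile or kubeconfig context is ever created, and every debugLogs probe correctly reports a missing context or profile. A hedged sketch of both sides of that check (the profile name cni-demo is illustrative; --cni is minikube's documented flag):

    # Rejected before any profile is created, as in the log above:
    minikube start -p cni-demo --driver=docker --container-runtime=crio --cni=false
    # Accepted: crio requires some CNI, e.g. the built-in bridge plugin:
    minikube start -p cni-demo --driver=docker --container-runtime=crio --cni=bridge
    minikube delete -p cni-demo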

                                                
                                    
x
+
TestPause/serial/Start (47.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-091554 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-091554 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.196824602s)
--- PASS: TestPause/serial/Start (47.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (128.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-100140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0815 00:43:15.004467   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-100140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m8.054243108s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.05s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (30.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-091554 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-091554 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.75527778s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.77s)

                                                
                                    
x
+
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-091554 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-091554 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-091554 --output=json --layout=cluster: exit status 2 (289.559804ms)

                                                
                                                
-- stdout --
	{"Name":"pause-091554","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-091554","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
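
The JSON above reports HTTP-like status codes per component (200/OK, 405/Stopped, 418/Paused), and minikube status deliberately exits non-zero (2 here) when the cluster is paused, so scripts should read the JSON rather than the exit code. A minimal sketch, assuming jq is available:

    minikube status -p pause-091554 --output=json --layout=cluster \
      | jq '{cluster: .StatusName, node: (.Nodes[0].Components | map_values(.StatusName))}'
    # -> {"cluster":"Paused","node":{"apiserver":"Paused","kubelet":"Stopped"}}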

                                                
                                    
x
+
TestPause/serial/Unpause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-091554 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-091554 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.54s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-091554 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-091554 --alsologtostderr -v=5: (2.537590643s)
--- PASS: TestPause/serial/DeletePaused (2.54s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.75s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.698797147s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-091554
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-091554: exit status 1 (18.530953ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-091554: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.75s)
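
The verification above reduces to three Docker-side checks that nothing named after the profile survives deletion. The same checks by hand, with the expected results in comments (the non-zero exit from docker volume inspect is the success signal here):

    docker ps -a --filter name=pause-091554       # no containers listed
    docker volume inspect pause-091554            # exit 1: "no such volume"
    docker network ls --filter name=pause-091554  # no leftover network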

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (57.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-979702 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-979702 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (57.870076479s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (44.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-265272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-265272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (44.853920912s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-100140 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [572af580-0add-445e-b69c-f710750fc3c7] Pending
helpers_test.go:344: "busybox" [572af580-0add-445e-b69c-f710750fc3c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [572af580-0add-445e-b69c-f710750fc3c7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003035567s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-100140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
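
Each DeployApp step in these groups follows the same create/wait/exec pattern. A rough equivalent with plain kubectl (the label and 8m timeout match the test's wait loop above):

    kubectl --context old-k8s-version-100140 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-100140 wait --for=condition=ready \
      pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-100140 exec busybox -- /bin/sh -c "ulimit -n"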

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-265272 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3c1e6996-1074-445e-b527-17c16ff5efc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3c1e6996-1074-445e-b527-17c16ff5efc6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003434211s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-265272 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-100140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-100140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-100140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-100140 --alsologtostderr -v=3: (11.972840408s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-979702 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [92882bdc-a6c0-49ef-941b-17c6ea443257] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [92882bdc-a6c0-49ef-941b-17c6ea443257] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003148172s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-979702 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-265272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-265272 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-265272 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-265272 --alsologtostderr -v=3: (11.810289597s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.81s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-979702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-979702 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100140 -n old-k8s-version-100140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100140 -n old-k8s-version-100140: exit status 7 (62.18151ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-100140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
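
Exit status 7 from minikube status is the normal result for a fully stopped profile (the status is bit-encoded, per minikube's help text), which is why the test notes "may be ok" and proceeds to enable the addon. A hedged sketch of the same tolerance in a shell script:

    # status exits 7 for a stopped profile, so don't abort on the exit code
    state=$(minikube status --format='{{.Host}}' -p old-k8s-version-100140 || true)
    if [ "$state" = "Stopped" ]; then
      minikube addons enable dashboard -p old-k8s-version-100140
    fi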

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (144.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-100140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-100140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m24.414097428s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100140 -n old-k8s-version-100140
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (144.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-979702 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-979702 --alsologtostderr -v=3: (11.856487545s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-265272 -n embed-certs-265272
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-265272 -n embed-certs-265272: exit status 7 (61.665869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-265272 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (276.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-265272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-265272 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m36.644258077s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-265272 -n embed-certs-265272
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (276.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-979702 -n no-preload-979702
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-979702 -n no-preload-979702: exit status 7 (78.92356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-979702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (262.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-979702 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-979702 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m22.425782162s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-979702 -n no-preload-979702
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-572898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 00:47:04.349730   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-572898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (43.365393448s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-572898 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03f92038-d7ad-43d4-ac46-4ec7bdbba01c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03f92038-d7ad-43d4-ac46-4ec7bdbba01c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004404485s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-572898 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-572898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-572898 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-572898 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-572898 --alsologtostderr -v=3: (11.815675383s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.82s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kcv4c" [e18de267-feaa-4176-92ba-e612bf214499] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004126519s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kcv4c" [e18de267-feaa-4176-92ba-e612bf214499] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004012572s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-100140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898: exit status 7 (61.443714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-572898 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-572898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-572898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m22.955933573s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-100140 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
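
The image audit flags anything outside the baseline Kubernetes images for this version; here the kindnet CNI images and the busybox test image are the expected extras. A rough stand-in for the check, assuming jq is available and that the JSON entries carry a repoTags field (an assumption about the output shape):

    minikube -p old-k8s-version-100140 image list --format=json \
      | jq -r '.[].repoTags[]?' | grep -v '^registry.k8s.io/'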

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-100140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-100140 -n old-k8s-version-100140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-100140 -n old-k8s-version-100140: exit status 2 (276.669725ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-100140 -n old-k8s-version-100140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-100140 -n old-k8s-version-100140: exit status 2 (272.462492ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-100140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-100140 -n old-k8s-version-100140
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-100140 -n old-k8s-version-100140
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.43s)
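
The pause check reads component state through Go templates, accepts exit status 2 while paused (apiserver Paused, kubelet Stopped), then confirms both recover after unpause. The same sequence by hand:

    minikube pause -p old-k8s-version-100140 --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p old-k8s-version-100140  # Paused, exit 2
    minikube status --format='{{.Kubelet}}' -p old-k8s-version-100140    # Stopped, exit 2
    minikube unpause -p old-k8s-version-100140 --alsologtostderr -v=1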

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (30.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-401008 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 00:48:15.004786   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-401008 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (30.351162162s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.35s)
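
Unlike the other groups, newest-cni leaves pod networking to an external CNI, so the test only waits on control-plane components; that is also why DeployApp and the app-after-stop checks below are skipped with the "cni mode requires additional setup" warning. The key flags from the invocation above, annotated in a hedged sketch:

    # --wait=apiserver,system_pods,default_sa  -> wait for the control plane only;
    #   without a CNI installed, ordinary pods cannot schedule yet
    # --network-plugin=cni                     -> defer pod networking to an external CNI
    # --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 -> pod CIDR handed to kubeadm
    minikube start -p newest-cni-401008 --memory=2200 \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.0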

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-401008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-401008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027751104s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-401008 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-401008 --alsologtostderr -v=3: (1.331984939s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-401008 -n newest-cni-401008
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-401008 -n newest-cni-401008: exit status 7 (59.401868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-401008 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/newest-cni/serial/SecondStart (13.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-401008 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-401008 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (13.363340604s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-401008 -n newest-cni-401008
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.65s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-401008 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-401008 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-401008 -n newest-cni-401008
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-401008 -n newest-cni-401008: exit status 2 (290.621623ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-401008 -n newest-cni-401008
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-401008 -n newest-cni-401008: exit status 2 (276.438158ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-401008 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-401008 -n newest-cni-401008
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-401008 -n newest-cni-401008
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)

TestNetworkPlugins/group/auto/Start (43.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.449899318s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6sf9z" [88c9f780-b302-457f-82c2-703da86b12bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6sf9z" [88c9f780-b302-457f-82c2-703da86b12bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003966612s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.18s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5fld4" [091eea84-c569-472e-80e8-627473e231e7] Running
E0815 00:50:09.096514   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.102901   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.114246   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.135588   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.176954   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.258971   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.420235   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.741915   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:10.383497   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003468999s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5fld4" [091eea84-c569-472e-80e8-627473e231e7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003776568s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-979702 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/Start (42.49s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0815 00:50:14.226314   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.491329768s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.49s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kvhs4" [227b079b-e655-41df-9097-6f4ccae5012a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004752205s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-979702 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-979702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-979702 --alsologtostderr -v=1: (1.058881189s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-979702 -n no-preload-979702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-979702 -n no-preload-979702: exit status 2 (283.773624ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-979702 -n no-preload-979702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-979702 -n no-preload-979702: exit status 2 (278.141357ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-979702 --alsologtostderr -v=1
E0815 00:50:19.348184   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-979702 -n no-preload-979702
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-979702 -n no-preload-979702
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kvhs4" [227b079b-e655-41df-9097-6f4ccae5012a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004187392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-265272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/calico/Start (56.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.222751919s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.22s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-265272 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-265272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-265272 -n embed-certs-265272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-265272 -n embed-certs-265272: exit status 2 (290.808712ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-265272 -n embed-certs-265272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-265272 -n embed-certs-265272: exit status 2 (290.0872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-265272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-265272 -n embed-certs-265272
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-265272 -n embed-certs-265272
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)

TestNetworkPlugins/group/custom-flannel/Start (45.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0815 00:50:50.072511   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (45.522393092s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.52s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f9jjs" [36f47c06-0bf6-4555-b190-aa0ba72f39bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003833828s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w7z9m" [30af1a17-6988-4140-8c3c-5f4b100ab853] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-w7z9m" [30af1a17-6988-4140-8c3c-5f4b100ab853] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003987143s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tz7g4" [a292cbb1-8cab-430d-afe4-9f6f96f3a4d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 00:51:18.070150   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/addons-877132/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tz7g4" [a292cbb1-8cab-430d-afe4-9f6f96f3a4d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.036175568s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-njjss" [53643957-f94d-4610-b92d-ba8acb0fe3c3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004240135s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (12.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hd4dr" [1e1081be-1da4-4cb8-9e2e-751f05751f1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hd4dr" [1e1081be-1da4-4cb8-9e2e-751f05751f1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003373166s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (33.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0815 00:51:31.033875   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/old-k8s-version-100140/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (33.759368139s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (33.76s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (47.5s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.502755223s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.50s)

TestNetworkPlugins/group/bridge/Start (62.11s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0815 00:52:04.349666   32105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/functional-906828/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-921766 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.111775663s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t822k" [37b792b6-20b6-430d-aabc-a14206418ddc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t822k" [37b792b6-20b6-430d-aabc-a14206418ddc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004660804s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zpv9g" [213aa574-60ef-4667-a996-7d93406df618] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003320353s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zpv9g" [213aa574-60ef-4667-a996-7d93406df618] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004166614s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-572898 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kgxqc" [fe529313-d84b-42f9-ba1b-fde0d16036c4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00468782s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-572898 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-572898 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898: exit status 2 (271.885584ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898: exit status 2 (279.49589ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-572898 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-572898 -n default-k8s-diff-port-572898
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-92t2n" [f6ffe419-9e9d-4de3-9ce4-4c2dd218977a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-92t2n" [f6ffe419-9e9d-4de3-9ce4-4c2dd218977a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003081022s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-921766 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-921766 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xp796" [186f52b7-7435-4b31-9fec-06303bbb20c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xp796" [186f52b7-7435-4b31-9fec-06303bbb20c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004062871s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-921766 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-921766 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (25/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-416773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-416773
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (3.13s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-921766 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-921766

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-921766

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/hosts:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/resolv.conf:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-921766

>>> host: crictl pods:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: crictl containers:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> k8s: describe netcat deployment:
error: context "kubenet-921766" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-921766" does not exist

>>> k8s: netcat logs:
error: context "kubenet-921766" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-921766" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-921766" does not exist

>>> k8s: coredns logs:
error: context "kubenet-921766" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-921766" does not exist

>>> k8s: api server logs:
error: context "kubenet-921766" does not exist

>>> host: /etc/cni:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: ip a s:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: ip r s:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: iptables-save:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: iptables table nat:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-921766" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-921766" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-921766" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: kubelet daemon config:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> k8s: kubelet logs:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:41:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-808958
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:42:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-425687
contexts:
- context:
    cluster: cert-expiration-808958
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:41:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-808958
  name: cert-expiration-808958
- context:
    cluster: kubernetes-upgrade-425687
    user: kubernetes-upgrade-425687
  name: kubernetes-upgrade-425687
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-808958
  user:
    client-certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/cert-expiration-808958/client.crt
    client-key: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/cert-expiration-808958/client.key
- name: kubernetes-upgrade-425687
  user:
    client-certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/kubernetes-upgrade-425687/client.crt
    client-key: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/kubernetes-upgrade-425687/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-921766

>>> host: docker daemon status:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: docker daemon config:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: docker system info:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: cri-docker daemon status:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: cri-docker daemon config:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: cri-dockerd version:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: containerd daemon status:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: containerd daemon config:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: containerd config dump:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: crio daemon status:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: crio daemon config:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: /etc/crio:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

>>> host: crio config:
* Profile "kubenet-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921766"

----------------------- debugLogs end: kubenet-921766 [took: 2.979640028s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-921766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-921766
--- SKIP: TestNetworkPlugins/group/kubenet (3.13s)

TestNetworkPlugins/group/cilium (3.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-921766 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-921766

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-921766

>>> host: /etc/nsswitch.conf:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/hosts:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/resolv.conf:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-921766

>>> host: crictl pods:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: crictl containers:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> k8s: describe netcat deployment:
error: context "cilium-921766" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-921766" does not exist

>>> k8s: netcat logs:
error: context "cilium-921766" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-921766" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-921766" does not exist

>>> k8s: coredns logs:
error: context "cilium-921766" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-921766" does not exist

>>> k8s: api server logs:
error: context "cilium-921766" does not exist

>>> host: /etc/cni:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: ip a s:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: ip r s:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: iptables-save:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: iptables table nat:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-921766

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-921766

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-921766" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-921766" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-921766

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-921766

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-921766" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-921766" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-921766" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-921766" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-921766" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: kubelet daemon config:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> k8s: kubelet logs:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:41:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-808958
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19443-25263/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:42:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-425687
contexts:
- context:
    cluster: cert-expiration-808958
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 00:41:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-808958
  name: cert-expiration-808958
- context:
    cluster: kubernetes-upgrade-425687
    user: kubernetes-upgrade-425687
  name: kubernetes-upgrade-425687
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-808958
  user:
    client-certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/cert-expiration-808958/client.crt
    client-key: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/cert-expiration-808958/client.key
- name: kubernetes-upgrade-425687
  user:
    client-certificate: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/kubernetes-upgrade-425687/client.crt
    client-key: /home/jenkins/minikube-integration/19443-25263/.minikube/profiles/kubernetes-upgrade-425687/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-921766

>>> host: docker daemon status:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: docker daemon config:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: docker system info:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: cri-docker daemon status:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: cri-docker daemon config:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: cri-dockerd version:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: containerd daemon status:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: containerd daemon config:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: containerd config dump:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: crio daemon status:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: crio daemon config:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: /etc/crio:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

>>> host: crio config:
* Profile "cilium-921766" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921766"

----------------------- debugLogs end: cilium-921766 [took: 3.157561479s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-921766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-921766
--- SKIP: TestNetworkPlugins/group/cilium (3.31s)